Mailing List Archive

Playback next steps
I have added support in MythTV playback for VAAPI2 (VAAPI plus
deinterlace plus ability to handle format changes) and Mediacodec. I
plan to do the same for NVIDIA CUDA/NVDEC.

However these new methods are passing images through main memory on
their way to the output and using the existing OpenGL code for
rendering. We need to pass images directly from decoder to graphics
output to improve efficiency and to support 4K video at 50 fps. The next
task will be to support this. I am not sure how this will work. The
existing OpenGL code in MythTV does a lot of processing of each image
and, in order to achieve the required performance, some of this may have
to be avoided. It may entail a new type of lightweight OpenGL that
leaves out the extensive OpenGL processing of images currently
undertaken, or may require new types of video rendering.

My knowledge of and experience with OpenGL is limited, and I have found
the existing OpenGL code rather difficult to understand.

If anybody has information or ideas about how to proceed with this,
please let me know.

Peter
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps
Hi.

Images obtained from VAAPI do not need to be copied into main memory to work with OpenGL. In fact, you can use a VAAPI2 image handle directly with OpenGL; it's just like any other texture.

Just like we do with VDPAU, all those decoding backends can be used directly with OpenGL.
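For anyone wanting to experiment, the zero-copy route on Linux looks roughly like this with libva 2.x and EGL - a sketch only, with error handling and NV12's second plane omitted, so treat the details as assumptions rather than a drop-in implementation:

// Sketch: zero-copy VAAPI -> OpenGL via dma-buf export (libva 2.x,
// EGL_EXT_image_dma_buf_import). In real code eglCreateImageKHR and
// glEGLImageTargetTexture2DOES are fetched via eglGetProcAddress.
#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <va/va.h>
#include <va/va_drmcommon.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

GLuint TextureFromVASurface(VADisplay vaDisplay, VASurfaceID surface,
                            EGLDisplay eglDisplay)
{
    VADRMPRIMESurfaceDescriptor desc;
    vaExportSurfaceHandle(vaDisplay, surface,
                          VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
                          VA_EXPORT_SURFACE_READ_ONLY |
                          VA_EXPORT_SURFACE_SEPARATE_LAYERS, &desc);
    vaSyncSurface(vaDisplay, surface);

    // Import the first layer (e.g. the Y plane of NV12) as an EGLImage.
    const EGLint attribs[] = {
        EGL_WIDTH,  (EGLint)desc.width,
        EGL_HEIGHT, (EGLint)desc.height,
        EGL_LINUX_DRM_FOURCC_EXT,      (EGLint)desc.layers[0].drm_format,
        EGL_DMA_BUF_PLANE0_FD_EXT,     desc.objects[desc.layers[0].object_index[0]].fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, (EGLint)desc.layers[0].offset[0],
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  (EGLint)desc.layers[0].pitch[0],
        EGL_NONE
    };
    EGLImageKHR image = eglCreateImageKHR(eglDisplay, EGL_NO_CONTEXT,
                                          EGL_LINUX_DMA_BUF_EXT, nullptr, attribs);
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
    return tex; // caller must eventually destroy the image and close the fds
}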

________________________________
From: Peter Bennett
Sent: Saturday, 1 December 2018 16:58
To: Development of MythTV
Cc: Mark Kendall; Don Carroll
Subject: [mythtv] Playback next steps

I have added support in MythTV playback for VAAPI2 (VAAPI plus
deinterlace plus ability to handle format changes) and Mediacodec. I
plan to do the same for NVIDIA CUDA/NVDEC.

However these new methods are passing images through main memory on
their way to the output and using the existing OpenGL code for
rendering. We need to pass images directly from decoder to graphics
output to improve efficiency and to support 4K video at 50 fps. The next
task will be to support this. I am not sure how this will work. The
existing OpenGL code in MythTV does a lot of processing of each image
and in order to achieve the required performance, some of this may have
to be avoided. It may entail a new type of lightweight opengl that
leaves out the extensive opengl processing of images currently
undertaken, or may require new types of video rendering.

My knowledge of and experience with OpenGL is limited, and I have found
the existing OpenGL code rather difficult to understand.

If anybody has information or ideas about how to proceed with this, please
let me know.

Peter

Re: Playback next steps
On 12/1/18 5:22 PM, Jean-Yves Avenard wrote:
> Hi.
>
> Images obtained from VAAPI do not need to be copied into main memory
> to work with opengl. In fact you can directly use with opengl a VAAPI2
> image handle. It's just like any other textures.
>
> Just like we do vdpau, all those decoding backends can be directly
> used with opengl.
>
Hi Jean-Yves

Certainly this is true, I just need to find out what APIs to use and how
to use them. I think that ffmpeg does not help with that side of things.
I probably have to resort to VA APIs, CUDA APIs, Android native APIs,
etc. to do this. I will tackle them one at a time.

Peter

Re: Playback next steps
On Sat, 1 Dec 2018 at 6:35 pm, Peter Bennett <pb.mythtv@gmail.com> wrote:

>
>
> On 12/1/18 5:22 PM, Jean-Yves Avenard wrote:
> > Hi.
> >
> > Images obtained from VAAPI do not need to be copied into main memory
> > to work with opengl. In fact you can directly use with opengl a VAAPI2
> > image handle. It's just like any other textures.
> >
> > Just like we do vdpau, all those decoding backends can be directly
> > used with opengl.
> >
> Hi Jean-Yves
>
> Certainly this is true, I just need to find out what APIs to use and how
> to use them. I think that ffmpeg does not help with that side of things.
> I probably have to resort to VA APIs, CUDA APIs, Android native APIs,
> etc. to do this. I will tackle them one at a time.
>

Well, for the decoding side of things, you can use the hwaccel2 ffmpeg
wrapper; you then find the image graphics handle in one of the data fields.
That leaves only the rendering side of things. All the decoding will be
identical for all platforms.
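A sketch of what that looks like in practice for the VAAPI case (the data[3] convention is documented in FFmpeg's pixfmt.h; the handler function here is hypothetical):

// Sketch: after avcodec_receive_frame() with a VAAPI hwaccel decoder,
// the frame holds a surface handle instead of pixel data.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/pixfmt.h>
}
#include <va/va.h>
#include <cstdint>

void HandleDecodedFrame(AVFrame *frame)
{
    if (frame->format == AV_PIX_FMT_VAAPI)
    {
        // For VAAPI, data[3] carries the VASurfaceID.
        VASurfaceID surface = (VASurfaceID)(uintptr_t)frame->data[3];
        // ... hand 'surface' to the renderer (e.g. an EGL dma-buf import) ...
        (void)surface;
    }
}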

Happy to help if needed.

Re: Playback next steps
On Sat, Dec 01, 2018 at 04:58:16PM -0500, Peter Bennett wrote:
> I have added support in MythTV playback for VAAPI2 (VAAPI plus deinterlace
> plus ability to handle format changes) and Mediacodec. I plan to do the same
> for NVIDIA CUDA/NVDEC.
>
> However these new methods are passing images through main memory on their
> way to the output and using the existing OpenGL code for rendering. We need
> to pass images directly from decoder to graphics output to improve
> efficiency and to support 4K video at 50 fps. The next task will be to
> support this. I am not sure how this will work. The existing OpenGL code in
> MythTV does a lot of processing of each image and in order to achieve the
> required performance, some of this may have to be avoided. It may entail a
> new type of lightweight opengl that leaves out the extensive opengl
> processing of images currently undertaken, or may require new types of video
> rendering.
>
> My knowledge of and experience with OpenGL is limited, and I have found the
> existing OpenGL code rather difficult to understand.
>
> If anybody has information/ideas about how to proceed with this please let
> me know.

Aman probably has some helpful tips for mediacodec. Is there a reason
you didn't copy him?

Also regarding mediacodec, I still believe surface rendering of video
is the preferred method on Android. We've already been told that will
fix the multiframe corruption problem on interlaced video. I believe
OpenGL can/should still be used to render the OSD in such cases. I
hope that means only a small amount of new code needs to be added to
scale and shift the video.
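For reference, a minimal sketch of that surface path with the NDK MediaCodec API - the loop structure is illustrative only, not MythTV code:

// Sketch: NDK MediaCodec configured to render straight to a Surface
// (ANativeWindow); the OSD would then be drawn by OpenGL on a layer above.
#include <media/NdkMediaCodec.h>
#include <android/native_window.h>

void RenderLoop(AMediaCodec *codec, ANativeWindow *surface, AMediaFormat *format)
{
    // Passing a window here selects surface output: no buffer copies in our code.
    AMediaCodec_configure(codec, format, surface, nullptr /*crypto*/, 0 /*flags*/);
    AMediaCodec_start(codec);
    for (;;) // (real code checks for EOS and the INFO_* status codes)
    {
        AMediaCodecBufferInfo info;
        ssize_t index = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000 /*us*/);
        if (index >= 0)
            AMediaCodec_releaseOutputBuffer(codec, index, true /*render to surface*/);
    }
}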

David
--
David Engel
david@istwok.net

Re: Playback next steps
> Message written by Peter Bennett <pb.mythtv@gmail.com> on 01.12.2018 at 22:58:
>
> I have added support in MythTV playback for VAAPI2 (VAAPI plus deinterlace plus ability to handle format changes) and Mediacodec. I plan to do the same for NVIDIA CUDA/NVDEC.
>
> However these new methods are passing images through main memory on their way to the output and using the existing OpenGL code for rendering. We need to pass images directly from decoder to graphics output to improve efficiency and to support 4K video at 50 fps. The next task will be to support this. I am not sure how this will work. The existing OpenGL code in MythTV does a lot of processing of each image and in order to achieve the required performance, some of this may have to be avoided. It may entail a new type of lightweight opengl that leaves out the extensive opengl processing of images currently undertaken, or may require new types of video rendering.
>
> My knowledge of and experience with OpenGL is limited, and I have found the existing OpenGL code rather difficult to understand.
>
> If anybody has information/ideas about how to proceed with this please let me know.
>
> Peter

Peter,

I’m really glad you brought this subject to dev discussion, as IMHO this is critical to the future of the frontend(*) - if we effectively want to have the frontend running(**) on anything other than x86…

From a long-term architectural planning perspective, I think the future architecture should deal with the separation of DPU, GPU and VPU (display processing unit, graphics processing unit and video processing unit).

95% of non-x86 platforms have those 3 units as discrete blocks - so a next-gen architecture should approach this correctly.

Another aspect is OS support.
If we want to support non-x86 HW, the OS is practically limited to Linux & Android.

I think - despite good Android support(***) - in the long term Myth should support Linux, as Linux is IMHO much more resource-efficient, controllable (in the software sense), open and uniform.

If there is common agreement about a future GO for non-x86 support via the Linux path, then we can enter the next stage: discussing the right architecture. After that we can discuss implementation and then resources.


(*) - also the future of MiniMyth2, where the target is to have single ARMv7/AARCH64 images similar to the current x86 image, with network boot, zero-touch provisioning and full plug-n-play.

(**) - I mean really smooth playback of 4k@60p for H265/VP9

(***) - I tried the frontend on Android 7.1 on my s905w tx3-mini and the experience fell short of expectations. The key issue is really jumpy playback despite 10-20% CPU load with the mediacodec decoder.





Re: Playback next steps
Peter/David,

I've been digging around and playing with the OpenGL, VAAPI and OpenMax code.

Focusing on OpenGL for now...

I forked master last week - just to keep track of patches etc. You can see what I've been doing at:

https://github.com/mark-kendall/mythtv/commits/master

In summary so far for OpenGL:

- fixed UYVY kernel deinterlacer (I see that's already in master)
- fixed YV12 kernel deinterlacer (pretty sure linear blend is broken as well, it looks terrible)
- patched mythavtest to add double rate deinterlacing support (mythavtest is really useful for performance testing if you haven't used it before)
- some openglbicubic fixes
- minor improvement to the UYVY kernel deinterlacer
- fix for desktop OpenGL ES2

In the pipeline already:

- add support for NPOT textures on GLES2/2.0 - should save a lot of video memory on Pi/Android etc
- optimisations for UYVY and desktop GL2.0
- fix use of glTexImage1D - just use 2D instead (1D not available on ES2.0)

I've also started some extensive debugging/logging code for OpenGLVideo to show exactly what is happening under the hood - it's fairly invasive though.
Does that sound useful?

While digging around and trying to get EGL and OpenGLES2.0 working properly on my system, I noticed the comment about ES2.0 and OpenMax playback - and all the subsequent ifdeffery required to disable QT5 opengl support...

Not tested the theory yet, but I think the reason OpenMax fails with QT5 OpenGL/EGL is because Lawrence creates his own EGL render device for the OSD. If using eglfs, this will interfere with the existing Qt screen (I don't think you can create 2 EGL devices). The simple solution I think is to check the Qt QPA platform and disallow the EGL OSD in VideoOutputOMX if the platform is eglfs. This should allow you to remove the whole OPENGL_QT5 ifdef stuff - which would really clean things up and ensure as many people as possible actually use the ES2.0 renderer (with or without EGL).
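For what it's worth, the platform check is cheap - something along these lines (a sketch using Qt 5's QGuiApplication::platformName(); the surrounding policy is only illustrative):

// Sketch: skip the private EGL OSD when Qt already owns the EGL display.
#include <QGuiApplication>

static bool EGLOSDAllowed(void)
{
    // Qt reports the QPA plugin in use, e.g. "xcb", "eglfs", "android".
    // With eglfs, Qt has created the (single) EGL display/surface itself,
    // so creating a second one for the OSD would conflict.
    return QGuiApplication::platformName() != "eglfs";
}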

The more involved solution is to fix VideoOutputOMX. At the moment Lawrence's code effectively assumes an X11 desktop. He uses the OMXVideoRender component to put images on screen (does that even work with eglfs?) and because of the approach has to handle all sorts of windowing issues/masks etc. He then doesn't like the softblend osd:) so creates an additional render device to display on top of the video.

A relatively simple solution is:
- for egl/fs, create VideoOutputOMXEGL (prob a sub-class of VideoOutputOpenGL) and replace the OMXVideoRender component with the Broadcom specific egl_render. EGL images transferred direct to screen and regular OpenGL OSD thrown in for free.
- for X11/desktop, I would actually remove the MythRenderEGL code and if they don't like the softblend osd, encourage them to use EGL...

There is also some broadcom specific code that is not properly ifdef'd out.

If I get the chance, I'm going to have a play with QT5/eglfs/OpenMax over Christmas.

Back to OpenGL proper, having got my head around the code again, I have a better idea of what is happening in the YV12 code - and can compare it to the other options.

Remember the aim of the game is to take a planar YUV420P/YV12 image in main memory and display it as a packed RGBA image on screen.
So there are three significant operations - repacking from planar to packed, transferring to video memory and YUV to RGB conversion - and just like skinning cats, there are multiple ways of doing it.
And remember that a YV12 image is 12bpp and full RGBA is 32bpp.
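As a rough sanity check on those numbers: a 1920x1080 frame is about 3MB at 12bpp but about 8MB at 32bpp, so at 50fps the transfer to video memory is roughly 155MB/s versus 415MB/s - and a 4K frame quadruples both figures.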

The simplest fallback route is to do the entire conversion in memory - repacking and colourspace conversion (note this should never actually happen with the current code):
CPU Load: High
GPU Load: Low
Memory transfer: High - 32bpp image transferred.
Colourspace control: None (using FFmpeg)
Availability: Always

The default option is to repack the frame into a full 32bit, packed format and perform colourspace conversion in the GPU. Repacking requires some custom code - interlaced material needs special handling.
CPU Load: Moderate with MMX support - all other platforms fall back to 'plain c'
GPU Load: Lowish - simple 1 texture sampling and colourspace control
Memory transfer: High - 32bpp
Colourspace control: Full
Availability: Always

The OpenGL 'Lite' route uses custom extensions in the GPU. Taking this route the video frame is repacked into a packed UYVY422 video frame, transferred to video memory and 'magically' converted to RGBA.
CPU Load: Moderate - repack from planar to packed.
GPU Load: ??
Memory transfer: Medium - image is 16bpp
Colourspace control: None
Availability: Variable

The custom UYVY code uses the same UYVY422 packed frame format and uses a custom texture format and shaders to convert to RGBA.
CPU Load: Moderate - repack from planar to packed.
GPU Load: Medium - the packed frame only requires 1 texture sample per pixel (no deint) but does require an extra filter stage to ensure exact 1 to 1 mapping between input and output. Any horizontal interpolation breaks sampling (because 2 pixels are encapsulated in one RGBA sample). Video memory usage is lower as frame is half width.
Memory transfer: Medium - 16bpp
Colourspace control: Full
Availability: Always

The YV12 code is actually where I started about 10 years ago:) There is no repacking in main memory - the planar frame is transferred to video memory and repacked and converted to RGBA in the GPU. Sounds nice but...
CPU Load: Low to very low.
GPU Load: High to very high. Each output pixel requires 3 texture samples, 2 of which are non-contiguous - as the video data is still planar. For progressive content this is not too bad but deinterlacing gets ugly really quickly:) see below. Also the GLSL shader cannot use rectangular textures so requires more GPU memory - but I have a fix for that coming.
Memory transfer: Low - 12bpp
Colourspace control: Full
Availability: Always
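To make the three-sample cost concrete, a progressive-only YV12 fragment shader reduces to something like this (illustrative GLSL with BT.601 limited-range coefficients, not the generated MythTV shader):

// Illustrative only: three texture samples per output pixel because the
// planes are stored separately, plus the YUV->RGB conversion arithmetic.
static const char *kYV12FragmentShader = R"(
precision mediump float;
uniform sampler2D s_y;
uniform sampler2D s_u;
uniform sampler2D s_v;
varying vec2 v_texcoord;
void main(void)
{
    float y = texture2D(s_y, v_texcoord).r;        // sample 1 (luma plane)
    float u = texture2D(s_u, v_texcoord).r - 0.5;  // sample 2 (chroma plane)
    float v = texture2D(s_v, v_texcoord).r - 0.5;  // sample 3 (chroma plane)
    y = 1.1643 * (y - 0.0625);                     // expand studio range
    gl_FragColor = vec4(y + 1.5958 * v,
                        y - 0.3917 * u - 0.8129 * v,
                        y + 2.017 * u,
                        1.0);
}
)";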

Texture sampling is the most expensive operation in a GLSL shader - and accessing memory away from the current sample is usually more expensive. So it is best to minimise texture sampling and not to access texture memory 'randomly'.

With the software fallback, default, OpenGL lite and UYVY approach - there is only one, coherent texture sample for progressive content. For OpenGL deinterlacers this increases depending on the deinterlacer: linear blend makes 3 (2 non-contiguous) and kernel 8 (7 non-contiguous) - which is why it is slower.

With YV12 you start with 3 texture samples for progressive - which in my testing offsets the gain from very low CPU usage and memory transfer - but for the kernel deinterlacer that increases to 24 texture samples (21 non-contiguous).

... and that is why I tried to find an alternative. It's fine for progressive content but deinterlacing performance just gets worse and worse.

I settled on the UYVY code - it balances its 'performance' between CPU, memory transfer and GPU.

In summary:
software fallback - why bother unless you have a modern CPU and a 15 year old GPU.
default - custom packing code may not be efficient on non X86 architecture and large memory transfer
opengl-lite - nice if available but colour rendition not great.
UYVY - simple repacking, smaller memory transfer and lower GPU texturing.
YV12 - low CPU (straight copy), smallest memory transfer but worse to terrible GPU texturing.

The code could probably try and make some assumptions about the best route to take depending on reported driver/hardware and compile type. e.g. Intel desktop and Pi have shared CPU/GPU memory so memory transfers probably aren't a bottleneck. A more powerful dedicated video card probably won't blink at the sampling required for YV12. At the end of the day, however, there is no right or wrong solution - as long as it works!

Again, hopefully this is helpful. Any questions, just ask.

Regards
Mark

P.S. Probably worth mentioning that I don't really think the code needs both UYVY and YV12 - and unsurprisingly I would suggest ditching YV12. At the same time the OpenGL code could be simplified greatly by removing OpenGL1 support - I'd be amazed if anyone is actually still using it.


Re: Playback next steps
On Fri, Dec 14, 2018 at 03:16:59PM +0000, MARK KENDALL wrote:
> Peter/David,
>
> I've been digging around and playing with the OpenGL, VAAPI and OpenMax code.
>
> Focusing on OpenGL for now...
>
> I forked master last week - just to keep track of patches etc. You can see what I've been doing at:
>
> https://github.com/mark-kendall/mythtv/commits/master
>
> In summary so far for OpenGL:
>
> - fixed UYVY kernel deinterlacer (I see that's already in master)

Yes, I used that change for a couple of weeks and committed earlier
this week.

I also opened the following ticket regarding some other commits you
commented on.

https://code.mythtv.org/trac/ticket/13358

See below for some other info regarding commit [59acf5c1].

> - fixed YV12 kernel deinterlacer (pretty sure linear blend is broken as well, it looks terrible)

Thanks again. Yeah, I'd questioned in another thread this week why
linear blend was or wasn't also affected.

> - patched mythavtest to add double rate deinterlacing support (mythavtest is really useful for performance testing if you haven't used it before)

I didn't realize mythavtest was rotting. I thought it was a "simpler"
wrapper for tv_play/mythplayer without the rest of the frontend. Is
that not the case?

> - some openglbicubic fixes
> - minor improvement to the UYVY kernel deinterlacer
> - fix for desktop OpenGL ES2
>
> In the pipeline already:

You've been busy! :)

> - add support for NPOT textures on GLES2/2.0 - should save a lot of video memory on Pi/Android etc
> - optimisations for UYVY and desktop GL2.0
> - fix use of glTexImage1D - just use 2D instead (1D not available on ES2.0)
>
> I've also started some extensive debugging/logging code for OpenGLVideo to show exactly what is happening under the hood - it's fairly invasive though.
> Does that sound useful?
>
> While digging around and trying to get EGL and OpenGLES2.0 working properly on my system, I noticed the comment about ES2.0 and OpenMax playback - and all the subsequent ifdeffery required to disable QT5 opengl support...
>
> Not tested the theory yet, but I think the reason OpenMax fails with QT5 OpenGL/EGL is because Lawrence creates his own EGL render device for the OSD. If using eglfs, this will interfere with the existing Qt screen (I don't think you can create 2 EGL devices). The simple solution I think is to check the Qt QPA platform and disallow the EGL OSD in VideoOutputOMX if the platform is eglfs. This should allow you to remove the whole OPENGL_QT5 ifdef stuff - which would really clean things up and ensure as many people as possible actually use the ES2.0 renderer (with or without EGL).
>
> The more involved solution is to fix VideoOutputOMX. At the moment Lawrence's code effectively assumes an X11 desktop. He uses the OMXVideoRender component to put images on screen (does that even work with eglfs?) and because of the approach has to handle all sorts of windowing issues/masks etc. He then doesn't like the softblend osd:) so creates an additional render device to display on top of the video.
>
> A relatively simple solution is:
> - for egl/fs, create VideoOutputOMXEGL (prob a sub-class of VideoOutputOpenGL) and replace the OMXVideoRender component with the Broadcom specific egl_render. EGL images transferred direct to screen and regular OpenGL OSD thrown in for free.
> - for X11/desktop, I would actually remove the MythRenderEGL code and if they don't like the softblend osd, encourage them to use EGL...
>
> There is also some broadcom specific code that is not properly ifdef'd out.
>
> If I get the chance, I'm going to have a play with QT5/eglfs/OpenMax over Christmas.
>
> Back to OpenGL proper, having got my head around the code again, I have a better idea of what is happening in the YV12 code - and can compare it to the other options.
>
> Remember the aim of the game is to take a planar YUV420P/YV12 image in main memory and display it as a packed RGBA image on screen.
> So there are three significant operations - repacking from planar to packed, transferring to video memory and YUV to RGB conversion - and just like skinning cats, there are multiple ways of doing it.
> And remember that a YV12 image is 12bpp and full RGBA is 32bpp.
>
> The simplest fallback route is to do the entire conversion in memory - repacking and colourspace conversion (note this should never actually happen with the current code):
> CPU Load: High
> GPU Load: Low
> Memory transfer: High - 32bpp image transferred.
> Colourspace control: None (using FFmpeg)
> Availability: Always
>
> The default option is to repack the frame into a full 32bit, packed format and perform colourspace conversion in the GPU. Repacking requires some custom code - interlaced material needs special handling.
> CPU Load: Moderate with MMX support - all other platforms fall back to 'plain c'
> GPU Load: Lowish - simple 1 texture sampling and colourspace control
> Memory transfer: High - 32bpp
> Colourspace control: Full
> Availability: Always
>
> The OpenGL 'Lite' route uses custom extensions in the GPU. Taking this route the video frame is repacked into a packed UYVY422 video frame, transferred to video memory and 'magically' converted to RGBA.
> CPU Load: Moderate - repack from planar to packed.
> GPU Load: ??
> Memory transfer: Medium - image is 16bpp
> Colourspace control: None
> Availability: Variable
>
> The custom UYVY code uses the same UYVY422 packed frame format and uses a custom texture format and shaders to convert to RGBA.
> CPU Load: 'moderate' CPU load - repack
> GPU Load: Medium - the packed frame only requires 1 texture sample per pixel (no deint) but does require an extra filter stage to ensure exact 1 to 1 mapping between input and output. Any horizontal interpolation breaks sampling (because 2 pixels are encapsulated in one RGBA sample). Video memory usage is lower as frame is half width.
> Memory transfer: Medium - 16bpp
> Colourspace control: Full
> Availability: Always
>
> The YV12 code is actually where I started about 10 years ago:) There is no repacking in main memory - the planar frame is transferred to video memory and repacked and converted to RGBA in the GPU. Sounds nice but...
> CPU Load: Low to very low..
> GPU Load: High to very high. Each output pixel requires 3 texture samples, 2 of which are non-contiguous - as the video data is still planar. For progressive content this is not too bad but deinterlacing gets ugly really quickly:) see below. Also the GLSL shader cannot use rectangular textures so requires more GPU memory - but I have a fix for that coming.
> Memory transfer: Low - 12bpp
> Colourspace control: Full
> Availability: Always
>
> Texture sampling is the most expensive operation in a GLSL shader - and accessing memory away from the current sample is usually more expensive. So it is best to minimise texture sampling and not to access texture memory 'randomly'.
>
> With the software fallback, default, OpenGL lite and UYVY approach - there is only one, coherent texture sample for progressive content. For OpenGL deinterlacers this increases depending on the deinterlacer: linear blend makes 3 (2 non-contiguous) and kernel 8 (7 non-contiguous) - which is why it is slower.
>
> With YV12 you start with 3 texture samples for progressive - which in my testing offsets the gain from very low CPU usage and memory transfer - but for the kernel deinterlacer that increases to 24 texture samples (21 non-contiguous).
>
> ... and that is why I tried to find an alternative. It's fine for progressive content but deinterlacing performance just gets worse and worse.
>
> I settled on the UYVY code - it balances its 'performance' between CPU, memory transfer and GPU.
>
> In summary:
> software fallback - why bother unless you have a modern CPU and a 15 year old GPU.
> default - custom packing code may not be efficient on non X86 architecture and large memory transfer
> opengl-lite - nice if available but colour rendition not great.
> UYVY - simple repacking, smaller memory transfer and lower GPU texturing.
> YV12 - low CPU (straight copy), smallest memory transfer but worse to terrible GPU texturing.
>
> The code could probably try and make some assumptions about the best route to take depending on reported driver/hardware and compile type. e.g. Intel desktop and Pi have shared CPU/GPU memory so memory transfers probably aren't a bottleneck. A more powerful dedicated video card probably won't blink at the sampling required for YV12. At the end of the day, however, there is no right or wrong solution - as long as it works!
>
> Again, hopefully this is helpful. Any questions, just ask.

Thanks for the analysis of the various trade-offs. It's helpful to
this graphics neophyte. Peter will hopefully jump in soon.

> Regards
> Mark
>
> P.S. Probably worth mentioning that I don't really think the code needs both UYVY and YV12 - and unsurprisingly I would suggest ditching YV12. At the same time the OpenGL code could be simplified greatly by removing OpenGL1 support - I'd be amazed if anyone is actually still using it.

There are some platforms, like the second generation firetv stick and
newer, and the mi box, that are limited in some way such that uyvy
doesn't work acceptably. YV12 currently works on them, for some
definitions of work. I don't understand the issue well enough to explain it further
so I will defer to Peter to do that. I am curious, though, how/if the
other options you give would work on them.

David
--
David Engel
david@istwok.net

Re: Playback next steps
Mark

I wish I understood everything you are saying here. I have been making
small changes to the OpenGL code and keeping what works, without full
understanding of what is going on. I will add some notes below as to
specific things that I have done or found out.

On 12/14/18 10:16 AM, MARK KENDALL wrote:
> Peter/David,
>
> I've been digging around and playing with the OpenGL, VAAPI and OpenMax code.
>
> Focusing on OpenGL for now...
>
> I forked master last week - just to keep track of patches etc. You can see what I've been doing at:
>
> https://github.com/mark-kendall/mythtv/commits/master
>
> In summary so far for OpenGL:
>
> - fixed UYVY kernel deinterlacer (I see that's already in master)
> - fixed YV12 kernel deinterlacer (pretty sure linear blend is broken as well, it looks terrible)
Linear blend looks fine to me with YV12. That is what I use. Maybe I am
missing something.
One thing to note - using kernel on some android devices has a
performance impact, it starts dropping frames - that does not happen
with linear blend.
> - patched mythavtest to add double rate deinterlacing support (mythavtest is really useful for performance testing if you haven't used it before)
> - some openglbicubic fixes
> - minor improvement to the UYVY kernel deinterlacer
> - fix for desktop OpenGL ES2
>
> In the pipeline already:
>
> - add support for NPOT textures on GLES2/2.0 - should save a lot of video memory on Pi/Android etc
> - optimisations for UYVY and desktop GL2.0
> - fix use of glTexImage1D - just use 2D instead (1D not available on ES2.0)
>
> I've also started some extensive debugging/logging code for OpenGLVideo to show exactly what is happening under the hood - it's fairly invasive though.
> Does that sound useful?
It sounds useful to me. At one point I temporarily added some debug code
to print the OpenGL shader code used. The MythTV source applies many
dynamic changes to the shader code before sending it to OpenGL, and it
was difficult for me to know what the code that actually ran looked like.
> While digging around and trying to get EGL and OpenGLES2.0 working properly on my system, I noticed the comment about ES2.0 and OpenMax playback - and all the subsequent ifdeffery required to disable QT5 opengl support...
>
> Not tested the theory yet, but I think the reason OpenMax fails with QT5 OpenGL/EGL is because Lawrence creates his own EGL render device for the OSD. If using eglfs, this will interfere with the existing Qt screen (I don't think you can create 2 EGL devices). The simple solution I think is to check the Qt QPA platform and disallow the EGL OSD in VideoOutputOMX if the platform is eglfs. This should allow you to remove the whole OPENGL_QT5 ifdef stuff - which would really clean things up and ensure as many people as possible actually use the ES2.0 renderer (with or without EGL).
>
> The more involved solution is to fix VideoOutputOMX. At the moment Lawrence's code effectively assumes an X11 desktop. He uses the OMXVideoRender component to put images on screen (does that even work with eglfs?) and because of the approach has to handle all sorts of windowing issues/masks etc. He then doesn't like the softblend osd:) so creates an additional render device to display on top of the video.
Some of this I did. Lawrence left the project abruptly and I got the
Raspberry Pi code to where it would compile and run under X11. His code
originally was for full screen OpenGL ES and required a customized QT
build. Some people did not like the softblend so I did some strange
stuff with the OpenGL ES OSD. What we have now is X11-based QT
displaying the GUI in an X11 window, OpenMAX video displaying the
playback using the full-screen OpenMAX API, and OpenGL ES displaying the
OSD using the OpenGL ES full-screen API. For OpenMAX and OpenGL it does
some calculations to put the video and the OSD into the correct place to
position it on the QT window, to give the illusion that it is actually
all in a window.

In versions of Raspberry Pi Raspbian starting from 2018, using the
OpenGL ES OSD causes severe slowdowns in the video playback, even if
nothing is visible in the OSD, so it has become useless and we reverted
to the softblend OSD.

Piotr O has his own build of Raspberry Pi Mythfrontend which uses
full-screen QT specially built for the purpose. I don't know how that
all works.
> A relatively simple solution is:
> - for egl/fs, create VideoOutputOMXEGL (prob a sub-class of VideoOutputOpenGL) and replace the OMXVideoRender component with the Broadcom specific egl_render. EGL images transferred direct to screen and regular OpenGL OSD thrown in for free.
> - for X11/desktop, I would actually remove the MythRenderEGL code and if they don't like the softblend osd, encourage them to use EGL...
We had thought to switch to the new "experimental" OpenGL for the
Raspberry Pi, which is an OpenGL implementation in X11 with GPU
acceleration. I compiled MythTV with suitable config settings to work
this way, and it was able to play video through OpenGL; then suddenly it
stopped working and would segfault every time I started it up. I never
figured out what was happening; the segfault was in QT event handling
right after initializing OpenGL.
> There is also some broadcom specific code that is not properly ifdef'd out.
>
> If I get the chance, I'm going to have a play with QT5/eglfs/OpenMax over Christmas.
>
> Back to OpenGL proper, having got my head around the code again, I have a better idea of what is happening in the YV12 code - and can compare it to the other options.
>
> Remember the aim of the game is to take a planar YUV420P/YV12 image in main memory and display it as a packed RGBA image on screen.
> So there are three significant operations - repacking from planar to packed, transferring to video memory and YUV to RGB conversion - and just like skinning cats, there are multiple ways of doing it.
> And remember that a YV12 image is 12bpp and full RGBA is 32bpp.
>
> The simplest fallback route is to do the entire conversion in memory - repacking and colourspace conversion (note this should never actually happen with the current code):
> CPU Load: High
> GPU Load: Low
> Memory transfer: High - 32bpp image transferred.
> Colourspace control: None (using FFmpeg)
> Availability: Always
>
> The default option is to repack the frame into a full 32bit, packed format and perform colourspace conversion in the GPU. Repacking requires some custom code - interlaced material needs special handling.
> CPU Load: Moderate with MMX support - all other platforms fall back to 'plain c'
> GPU Load: Lowish - simple 1 texture sampling and colourspace control
> Memory transfer: High - 32bpp
> Colourspace control: Full
> Availability: Always
>
> The OpenGL 'Lite' route uses custom extensions in the GPU. Taking this route the video frame is repacked into a packed UYVY422 video frame, transferred to video memory and 'magically' converted to RGBA.
> CPU Load: Moderate - repack from planar to packed.
> GPU Load: ??
> Memory transfer: Medium - image is 16bpp
> Colourspace control: None
> Availability: Variable
>
> The custom UYVY code uses the same UYVY422 packed frame format and uses a custom texture format and shaders to convert to RGBA.
> CPU Load: 'moderate' CPU load - repack
> GPU Load: Medium - the packed frame only requires 1 texture sample per pixel (no deint) but does require an extra filter stage to ensure exact 1 to 1 mapping between input and output. Any horizontal interpolation breaks sampling (because 2 pixels are encapsulated in one RGBA sample). Video memory usage is lower as frame is half width.
> Memory transfer: Medium - 16bpp
> Colourspace control: Full
> Availability: Always
Note there is an android problem with UYVY. On some devices (e.g. fire
stick g2), OpenGL ES does not support float precision highp and defaults
to mediump. The OpenGL code that applies the color suffers a rounding
error: instead of each pixel getting its correct color, each
alternate pixel gets the color for its neighbor, on the right
hand half of the screen. See https://imgur.com/dLoMUau and
https://imgur.com/lbfyEWQ . I don't know why YV12 does not suffer from
that problem.
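A common defensive pattern for shaders (a sketch, not what MythTV currently does) is to let the GLSL ES preprocessor pick the qualifier, since GL_FRAGMENT_PRECISION_HIGH is only defined where highp is available in fragment shaders:

// Illustrative GLSL ES fragment shader header: fall back to mediump only
// where highp is genuinely unavailable, instead of assuming either.
static const char *kPrecisionHeader = R"(
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
)";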

> The YV12 code is actually where I started about 10 years ago:) There is no repacking in main memory - the planar frame is transferred to video memory and repacked and converted to RGBA in the GPU. Sounds nice but...
> CPU Load: Low to very low..
> GPU Load: High to very high. Each output pixel requires 3 texture samples, 2 of which are non-contiguous - as the video data is still planar. For progressive content this is not too bad but deinterlacing gets ugly really quickly:) see below. Also the GLSL shader cannot use rectangular textures so requires more GPU memory - but I have a fix for that coming.
> Memory transfer: Low - 12bpp
> Colourspace control: Full
> Availability: Always
>
> Texture sampling is the most expensive operation in a GLSL shader - and accessing memory away from the current sample is usually more expensive. So it is best to minimise texture sampling and not to access texture memory 'randomly'.
Something in the OpenGL is slow enough to impact playback on fire stick
g2. Many frames are dropped because the OpenGL seems to be taking longer
than one frame interval to execute, including with progressive frames.
> With the software fallback, default, OpenGL lite and UYVY approach - there is only one, coherent texture sample for progressive content. For OpenGL deinterlacers this increases depending on the deinterlacer: linear blend makes 3 (2 non-contiguous) and kernel 8 (7 non-contiguous) - which is why it is slower.
>
> With YV12 you start with 3 texture samples for progressive - which in my testing offsets the gain from very low CPU usage and memory transfer - but for the kernel deinterlacer that increases to 24 texture samples (21 non-contiguous).
>
> ... and that is why I tried to find an alternative. It's fine for progressive content but deinterlacing performance just gets worse and worse.
>
> I settled on the UYVY code - it balances its 'performance' between CPU, memory transfer and GPU.
>
> In summary:
> software fallback - why bother unless you have a modern CPU and a 15 year old GPU.
Some reasons for using software decode
- VDPAU has a bug with decoding MPEG2 that results in pixellation on
many USA stations.
- fire stick 2g mediacodec has a bug where deinterlaced content causes
the decoder to hang.
- Subtitles are not working for some decoders. They work with software
decoding, for those people who need them.
> default - custom packing code may not be efficient on non X86 architecture and large memory transfer
> opengl-lite - nice if available but colour rendition not great.
> UYVY - simple repacking, smaller memory transfer and lower GPU texturing.
> YV12 - low CPU (straight copy), smallest memory transfer but worse to terrible GPU texturing.
Except on fire tv 2g and the like where UYVY is terrible and YV12 seems
fine.
> The code could probably try and make some assumptions about the best route to take depending on reported driver/hardware and compile type. e.g. Intel desktop and Pi have shared CPU/GPU memory so memory transfers probably aren't a bottleneck. A more powerful dedicated video card probably won't blink at the sampling required for YV12. At the end of the day, however, there is no right or wrong solution - as long as it works!
>
> Again, hopefully this is helpful. Any questions, just ask.
>
> Regards
> Mark
>
> P.S. Probably worth mentioning that I don't really think the code needs both UYVY and YV12 - and unsurprisingly I would suggest ditching YV12. At the same time the OpenGL code could be simplified greatly by removing OpenGL1 support - I'd be amazed if anyone is actually still using it.
>
>

Peter

Re: Playback next steps
On Fri, Dec 14, 2018 at 03:16:59PM +0000, MARK KENDALL wrote:
> I forked master last week - just to keep track of patches etc. You can see what I've been doing at:
>
> https://github.com/mark-kendall/mythtv/commits/master
>
> In summary so far for OpenGL:
>
> - fixed UYVY kernel deinterlacer (I see that's already in master)
> - fixed YV12 kernel deinterlacer (pretty sure linear blend is broken as well, it looks terrible)
> - patched mythavtest to add double rate deinterlacing support (mythavtest is really useful for performance testing if you haven't used it before)
> - some openglbicubic fixes
> - minor improvement to the UYVY kernel deinterlacer
> - fix for desktop OpenGL ES2

I tested your changes on my nvidia shield this evening with a little
extra focus on YV12. I'll try on my firetv sticks this weekend. In
short, after it compiled, everything worked well including one
pleasant and interesting surprise. Here are some of the details.

Android doesn't have a #define for GL_RGBA16 so I had to add another
workaround for it to the existing ones in mythrender_opengl_defs.h.
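For anyone following along, the workaround is presumably of this shape - 0x805B is the standard value of the desktop token; the exact patch may differ:

// Android GL headers omit the desktop GL_RGBA16 token; define it manually
// so the code compiles. 0x805B is the standard enum value.
#ifndef GL_RGBA16
#define GL_RGBA16 0x805B
#endif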

There was no green line with YV12 like before. Either your changes or
another one must have fixed that.

The YV12 kernel deinterlacer worked fine. Here's where the surprise
was.

I like to watch some things with timestretch, sometimes with a lot of
timestretch. One of those things is bicycle racing. At 2x
timestretch, it can be quite the decoding and deinterlacing torture
test with lots of individual cyclists and moving scenery. It's even
more tortuous for races on one of my h.264/1080i channels.

My old, Linux/vdpau frontend had no problem with it but my shield has
never been able to handle it without slight to moderate stuttering.
With mediacodec decoding, it wasn't watchable and required backing
down quite a bit on the timestretch. With software decoding, uyvy
pixels and linear blend deinterlacing it could almost keep up. A lot
of motion would cause stuttering. Tonight, with software decoding,
yv12 pixels and kernel deinterlacing, it kept up just fine!

I believe you said deinterlacing was the big problem area for yv12. I
don't doubt you on that. However, it seems the savings from moving
less data around more than make up for the loss from more complicated
deinterlacing.

David
--
David Engel
david@istwok.net

Re: Playback next steps
On 14/12/2018 22:09, Peter Bennett wrote:
>> A relatively simple solution is:
>> - for egl/fs, create VideoOutputOMXEGL (prob a sub-class of
>> VideoOutputOpenGL) and replace the OMXVideoRender component with the
>> Broadcom specific egl_render. EGL images transferred direct to screen
>> and regular OpenGL OSD thrown in for free.
>> - for X11/desktop, I would actually remove the MythRenderEGL code and
>> if they don't like the softblend osd, encourage them to use EGL...
> We had thought to switch to the new "experimental" OpenGL for the
> raspberry pi, which is an OpenGL implementation in X11 with gpu
> acceleration. I compiled MyhthTV with suitable config settings to work
> this way, and it was able to play video through OpenGL, then suddenly it
> stopped working and would segfault every time I started it up. I never
> figured out what was happening, the segfault was in QT event handling
> right after initializing OpenGL.

Every time I saw this happen it was because it was loading the OpenGL
libraries for the wrong implementation of OpenGL.

i.e. loading the broadcom based opengl libs (aka the original rpi stuff)
rather than the newer libraries which support the newer experimental
opengl.


Regards
Stuart

Re: Playback next steps
> --- On Fri, 14/12/18, David Engel <david@istwok.net> wrote:
>
> > I didn't realize mythavtest was rotting. I thought it was a "simpler"
> > wrapper for tv_play/mythplayer without the rest of the frontend. Is
> > that not the case?

No - it isn't rotting, I just extended it a little - for double rate
deinterlacing and gpu decoder usage.
I wrote mythavtest to just play video at the fastest possible rate (using
the current display profile) with no audio and no a/v sync. You need to
turn off any sync to vblank that the driver may be using. When using
opengl, there are also a number of environment variables that you can set
to disable various functionality to test different code paths.


> > There are some platforms, like the second generation firetv stick and
> > newer, and the mi box, that are limited in some way such that uyvy
> > doesn't work acceptably. YV12 currently works on them, for some
> > definitions of work.

I suspect it is either the extra CPU processing or the extra framebuffer
stage that is currently needed for UYVY.

On Fri, 14/12/18, Peter Bennett <pb.mythtv@gmail.com> wrote:
>
> Linear blend looks fine to me with YV12. That is what I use. Maybe I am
> missing something.
> One thing to note - using kernel on some android devices has a
> performance impact, it starts dropping frames - that does not happen
> with linear blend.

YV12 linear blend is currently not double rate at all - so presumably you
are only running single rate?
As discussed later, kernel is much more GPU intensive - so not surprising
that it starts to drop frames.


> It sounds useful to me. At one point I temporarily added some debug code
> to print the OpenGL shader code used. The MythTV source applies many
> dynamic changes to the shader code before sending it to OpenGL, and it
> was difficult for me to know what the code that actually ran looked like.

The full and final shader code is already dumped to the logs with debug
level logging.

> His code originally was for full screen OpenGL ES and required a
> customized QT build. Some people did not like the softblend so I did
> some strange stuff with the OpenGL ES OSD. What we have now is x11
> based QT displaying the GUI in an X11 window, OpenMAX video displaying
> the playback using the full-screen OpenMAX API, and OpenGL ES
> displaying the OSD using the OpenGL ES full-screen API. For OpenMAX and
> OpenGL it does some calculations to put the video and the OSD into the
> correct place to position it on the QT window, to give the illusion
> that it is actually all in a window.
>
> In versions of Raspberry Pi Raspbian starting from 2018, using the
> OpenGL ES OSD causes severe slowdowns in the video playback, even if
> nothing is visible in the OSD, so it has become useless and we reverted
> to the softblend OSD.
>
> Piotr O has his own build of Raspberry Pi Mythfrontend which uses
> full-screen QT specially built for the purpose. I don't know how that
> all works.

I'm currently compiling master with an up to date raspbian stretch lite
with slight modifications to enable ES2.0 and QT5 opengl (both are
currently disabled if not an android build).

I noticed that the raspbian qt5 is built with eglfs/egl support - will see
whether that actually means broadcom specific eglfs - I don't think it does
- in which case I'll cross compile with the correct configuration.
Qt5 eglfs should just work without issue (it works on my debian/mesa
desktop). There is then work to be done to use the OpenMax egl code when
using eglfs and use the 'regular' opengl osd. For x11, we should stick with
the current approach and disable the EGL OSD if running with eglfs.

I think there are also some fixes needed to the configure script. At the
moment I think it looks for gles2.0 support in the qt spec to enable
gles2.0 - but gles2.0 support appears to be available when egl is
available. Not sure of the correct permutations here - and the configure
script makes my brain melt.

I suspect that the best performance (and memory consumption etc) will only
be achieved if using straight eglfs - i.e. no X11. It's a shame no-one
provides a PPA for the right QT5.

> > software fallback - why bother unless you have a modern CPU and a 15
> > year old GPU.
>
> Some reasons for using software decode
> - VDPAU has a bug with decoding MPEG2 that results in pixellation on
> many USA stations.
> - fire stick 2g mediacodec has a bug where deinterlaced content causes
> the decoder to hang.
> - Subtitles are not working for some decoders. They work with software
> decoding, for those people who need them.

To be clear, this is in reference to software fallback for YUV to RGB
conversion - this already assumes software decode and is not related to
hardware accelerated decode.

> Note there is an android problem with UYVY. On some devices (e.g. fire
> stick g2), OpenGL ES does not support float precision highp and defaults
> to mediump. The OpenGL code that applies the color suffers a rounding
> error: instead of each pixel getting its correct color, each alternate
> pixel gets the color for its neighbor, on the right hand half of the
> screen. See https://imgur.com/dLoMUau and https://imgur.com/lbfyEWQ .
> I don't know why YV12 does not suffer from that problem.

Yes - definitely a precision issue - but is it always exactly half way
across the screen or at different positions? If always exactly half way
regardless of source width, it sounds more like an 'off by one' error.

But the issue may be moot...

Since I wrote that summary, I've realised that both YV12 and UYVY are
subject to the 'interlaced chroma upsampling' bug. I fixed this with the
default, pre-UYVY code and then obviously forgot about it:)

The fix for UYVY is pretty straightforward. Don't use FFmpeg for upsampling
(known issue with FFmpeg), use the Mythtv packing code and don't try and
pack two pixels into one sample. Actually simplifies the code and removes
the need for the extra filter stage. Would benefit from some NEON specific
code - then most platforms will have SIMD optimisations (I doubt there are
many platforms not covered by either MMX or NEON).
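As an illustration of how little code that needs - a NEON sketch for packing one row, assuming progressive 4:2:0 input, widths that are multiples of 16, and the same chroma row reused for two luma rows:

// Illustrative NEON sketch: pack one row of planar YUV420 into UYVY422.
// For 4:2:0 input the same chroma row is used for two successive luma rows.
#include <arm_neon.h>
#include <stdint.h>

void PackRowUYVY(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                 uint8_t *dst, int width /* multiple of 16 */)
{
    for (int x = 0; x < width; x += 16)
    {
        uint8x8x2_t uv    = vzip_u8(vld1_u8(u), vld1_u8(v)); // u0 v0 u1 v1 ...
        uint8x16_t chroma = vcombine_u8(uv.val[0], uv.val[1]);
        uint8x16_t luma   = vld1q_u8(y);                     // y0 y1 ... y15
        uint8x16x2_t uyvy = vzipq_u8(chroma, luma);          // U Y V Y U Y V Y ...
        vst1q_u8(dst,      uyvy.val[0]);
        vst1q_u8(dst + 16, uyvy.val[1]);
        y += 16; u += 8; v += 8; dst += 32;
    }
}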

Not so sure about YV12. I think the shaders are trying to deal with it. But
it doesn't work and it is also resampling progressive content as well. If
the frame is sampled for interlaced chroma, it needs proper repacking
regardless of whether it is deinterlaced or not - and progressive content
needs 'regular' resampling/packing.

With the UYVY change in mind, I'd actually propose the following:-

Drop support for the software YUV to RGB case.
- it only actually works with OpenGL1 as the only requirement for GL2/ES2
to work is shaders - which are mandatory per the spec.

Drop the opengl-lite option
- if the UYVY code is changed, I can't see any benefit to the lite code.
The shaders are simple and using the apple/arb extensions for lite just
means you lose all colourspace, colour standard control.

This then leaves you with UYVY and YV12 - which I would change to
profiles (OpenGL-YV12 and OpenGL-UYVY) rather than settings per se. Makes
the interface cleaner (no settings) and you essentially have one profile
that is more CPU/memory transfer costly (UYVY) and one that is more GPU
costly (YV12). Users can then decide which works best for their system.

The 'extra filter stage' could no doubt go (if still needed) after UYVY
changes.

thoughts?

regards
Mark

Re: Playback next steps
On 12/17/18 6:21 AM, Mark Kendall wrote:
>
> --- On Fri, 14/12/18, David Engel <david@istwok.net> wrote:
>
> > I didn't realize mythavtest was rotting. I thought it was a "simpler"
> > wrapper for tv_play/mythplayer without the rest of the frontend. Is
> > that not the case?
>
> No - it isn't rotting, I just extended it a little - for double rate
> deinterlacing and gpu decoder usage.
> I wrote mythavtest to just play video at the fastest possible rate
> (using the current display profile) with no audio and no a/v sync. You
> need to turn off any sync to vblank that the driver may be using. When
> using opengl, there are also a number of environment variables that
> you can set to disable various functionality to test different code paths.
>
> > There are some platforms, like the second generation firetv stick and
> > newer, and the mi box, that are limited in some way such that uyvy
> > doesn't work acceptably. YV12 currently works on them, for some
> > definitions of work.
>
> I suspect it is either the extra CPU processing or the extra
> framebuffer stage that is currently needed for UYVY.
>
> On Fri, 14/12/18, Peter Bennett <pb.mythtv@gmail.com> wrote:
>
> Linear blend looks fine to me with YV12. That is what I use. Maybe I am
> missing something.
> One thing to note - using kernel on some android devices has a
> performance impact, it starts dropping frames - that does not happen
> with linear blend.
>
> YV12 linear blend is currently not double rate at all - so presumably
> you are only running single rate?
Hmm - I selected Linear blend 2X and it seemed to be better than plain
linear blend. Maybe not?
> As discussed later, kernel is much more GPU intensive - so not
> surprising that it starts to drop frames.
>
>
> It sounds useful to me. At one point I temporarily added some debug code
> to print the OpenGL shader code used. The MythTV source applies many
> dynamic changes to the shader code before sending it to OpenGL, and it
> was difficult for me to know what the code that actually ran looked like.
>
> The full and final shader code is already dumped to the logs with
> debug level logging.
>
> His code originally was for full screen OpenGL ES and required a
> customized QT build. Some people did not like the softblend so I did
> some strange stuff with the OpenGL ES OSD. What we have now is x11
> based QT displaying the GUI in an X11 window, OpenMAX video displaying
> the playback using the full-screen OpenMAX API, and OpenGL ES
> displaying the OSD using the OpenGL ES full-screen API. For OpenMAX and
> OpenGL it does some calculations to put the video and the OSD into the
> correct place to position it on the QT window, to give the illusion
> that it is actually all in a window.
>
> In versions of Raspberry Pi Raspbian starting from 2018, using the
> OpenGL ES OSD causes severe slowdowns in the video playback, even if
> nothing is visible in the OSD, so it has become useless and we reverted
> to the softblend OSD.
>
> Piotr O has his own build of Raspberry Pi Mythfrontend which uses
> full-screen QT specially built for the purpose. I don't know how that
> all works.
>
> I'm currently compiling master with an up to date raspbian stretch
> lite with slight modifications to enable ES2.0 and QT5 opengl (both
> are currently disabled if not an android build).
>
> I noticed that the raspbian qt5 is built with eglfs/egl support - will
> see whether that actually means broadcom specific eglfs - I don't
> think it does - in which case I'll cross compile with the correct
> configuration.
> Qt5 eglfs should just work without issue (it works on my debian/mesa
> desktop). There is then work to be done to use the OpenMax egl code
> when using eglfs and use the 'regular' opengl osd. For x11, we should
> stick with the current approach and disable the EGL OSD if running
> with eglfs.
>
> I think there are also some fixes needed to the configure script. At
> the moment I think it looks for gles2.0 support in the qt spec to
> enable gles2.0 - but gles2.0 support appears to be available when egl
> is available. Not sure of the correct permutations here - and the
> configure script makes my brain melt.
>
> I suspect that the best performance (and memory consumption etc) will
> only be achieved if using straight eglfs - i.e. no X11. It's a shame
> no-one provides a PPA for the right QT5.
>
>
> > software fallback - why bother unless you have a modern CPU and a 15
> > year old GPU.
>
> Some reasons for using software decode
> - VDPAU has a bug with decoding MPEG2 that results in pixellation on
> many USA stations.
> - fire stick 2g mediacodec has a bug where deinterlaced content causes
> the decoder to hang.
> - Subtitles are not working for some decoders. They work with software
> decoding, for those people who need them.
>
>
> To be clear, this is in reference to software fallback for YUV to RGB
> conversion - this already assumes software decode and is not related
> to hardware accelerated decode.
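
For context, the per-pixel work that a software YUV to RGB fallback does
on the CPU looks roughly like this (a sketch using BT.601 full-range
coefficients; the real converter also handles video range and other
colour standards):

    #include <algorithm>
    #include <cstdint>

    // Convert one pixel of YUV to RGB (BT.601, full range).
    static void yuvToRgb(uint8_t y, uint8_t u, uint8_t v,
                         uint8_t& r, uint8_t& g, uint8_t& b)
    {
        const float fy = y, fu = u - 128.0f, fv = v - 128.0f;
        const auto clamp8 = [](float x) {
            return static_cast<uint8_t>(std::clamp(x, 0.0f, 255.0f));
        };
        r = clamp8(fy + 1.402f * fv);
        g = clamp8(fy - 0.344f * fu - 0.714f * fv);
        b = clamp8(fy + 1.772f * fu);
    }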
>
> Note there is an android problem with UYVY. On some devices (e.g.
> fire stick g2), OpenGL ES does not support float precision highp and
> defaults to mediump. The OpenGL code that applies the color suffers a
> rounding error and, instead of each pixel getting its correct color,
> each alternate pixel gets the color of its neighbor on the right-hand
> half of the screen. See https://imgur.com/dLoMUau and
> https://imgur.com/lbfyEWQ . I don't know why YV12 does not suffer
> from that problem.
>
>
> Yes - definitely a precision issue - but is it always exactly half way
> across the screen or at different positions? If always exactly half
> way regardless of source width, it sounds more like an 'off by one' error.
>
I think it is actually about two thirds of the way across the screen
that it starts. Other details -
The jaggies occur with both interlaced and progressive video.
In file mythrender_opengl2es.h there is the line
m_qualifiers = "precision highp float;\n";
This was previously set as mediump - and the jaggies occurred on both
the fire TV and the fire TV 4K.
After I changed this to highp:
- On fire TV there was an error message that highp is not supported
  and it is reverting to mediump, and the jaggies remain.
- On fire TV 4K the jaggies went away and the picture is perfect.
> But the issue may be moot...
>
> Since I wrote that summary, I've realised that both YV12 and UYVY
> are subject to the 'interlaced chroma upsampling' bug. I fixed this
> with the default, pre-UYVY code and then obviously forgot about it :)
>
> The fix for UYVY is pretty straightforward. Don't use FFmpeg for
> upsampling (known issue with FFmpeg), use the Mythtv packing code and
> don't try and pack two pixels into one sample. Actually simplifies the
> code and removes the need for the extra filter stage. Would benefit
> from some NEON specific code - then most platforms will have SIMD
> optimisations (I doubt there are many platforms not covered by either
> MMX or NEON).
>
> Not so sure about YV12. I think the shaders are trying to deal with
> it, but it doesn't work, and it also resamples progressive content.
> If the frame is sampled for interlaced chroma, it needs proper
> repacking regardless of whether it is deinterlaced or not - and
> progressive content needs 'regular' resampling/packing.
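
A sketch of what that repacking has to respect (nearest-neighbour only,
hypothetical helper, not MythTV's code): when doubling the height of a
4:2:0 chroma plane, interlaced chroma rows alternate between fields, so
each output row must take its chroma from a source row in the same
field - blending across fields, or treating interlaced chroma as
progressive, is exactly the 'interlaced chroma upsampling' bug.

    #include <algorithm>
    #include <cstdint>
    #include <cstring>

    // Double the height of a 4:2:0 chroma plane (nearest neighbour).
    // src has srcHeight rows; dst receives 2 * srcHeight rows.
    static void upsampleChroma(const uint8_t* src, uint8_t* dst,
                               int width, int srcHeight, bool interlaced)
    {
        for (int y = 0; y < 2 * srcHeight; ++y)
        {
            int srcRow;
            if (!interlaced)
                srcRow = y / 2;                 // plain line doubling
            else
                // Source rows alternate top/bottom field: pick the
                // nearest row from this output row's own field.
                srcRow = (y / 4) * 2 + (y & 1);
            srcRow = std::min(srcRow, srcHeight - 1);
            std::memcpy(dst + y * width, src + srcRow * width, width);
        }
    }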
>
> With the UYVY change in mind, I'd actually propose the following:-
>
> Drop support for the software YUV to RGB case.
>  - it only actually works with OpenGL1 as the only requirement for
> GL2/ES2 to work is shaders - which are mandatory per the spec.
>
> Drop the opengl-lite option
>   - if the UYVY code is changed, I can't see any benefit to the lite
> code. The shaders are simple and using the apple/arb extensions for
> lite just means you lose all colourspace, colour standard control.
>
> This then leaves you with UYVY and YV12 - which I would change to
> profiles (OpenGL-YV12 and OpenGL-UYVY) rather than settings per se.
Regarding the settings - the code was looking at environment variables
to decide if either of these was to be disabled. I added the settings
recently because there seemed no easy way to pass in environment
variables with android.
> Makes the interface cleaner (no settings) and you essentially have one
> profile that is more CPU/memory transfer costly (UYVY) and one that is
> more GPU costly (YV12). Users can then decide which works best for
> their system.
>
Sounds good - at some point we need to add support for 10-bit color,
which maybe would be a third option?
> The 'extra filter stage' could no doubt go (if still needed) after
> UYVY changes.
>
I added a setting for the extra filter stage because on fire tv 4k, randomly,
playback would just show a black screen. Adding that extra stage solved
it, but I do not understand what was happening. If I pulled up an OSD
display, I could see the video playing through the transparent parts of
the OSD, but the rest of the screen remained black.
> thoughts?
>

This is mostly way over my head, but it sounds good.

My recommendation is that we should only commit this into master after
we have created the fixes/30 branch in January. The changes sound major
and they could potentially cause some unexpected issues with some
configurations. I would not want the users who install from Ubuntu 19.04
to encounter problems.
Re: Playback next steps [ In reply to ]
On Mon, Dec 17, 2018 at 11:19:23AM -0500, Peter Bennett wrote:
> On 12/17/18 6:21 AM, Mark Kendall wrote:
> >
> > --- On Fri, 14/12/18, David Engel <david@istwok.net> wrote:
> >
> > > I didn't realize mythavtest was rotting.  I thought it was a
> > > "simpler" wrapper for tv_play/mythplayer without the rest of the
> > > frontend.  Is that not the case?
> >
> >
> > No - it isn't rotting, I just extended it a little - for double rate
> > deinterlacing and gpu decoder usage.
> > I wrote mythavtest to just play video at the fastest possible rate
> > (using the current display profile) with no audio and no a/v sync. You
> > need to turn off any sync to vblank that the driver may be using. When
> > using opengl, there are also a number of environment variables that you
> > can set to disable various functionality to test different code paths.

I see.

> > The 'extra filter stage' could no doubt go (if still needed) after UYVY
> > changes.
> >
> I added a setting for the extra filter stage because on fire tv 4k, randomly,
> playback would just show a black screen. Adding that extra stage solved it,
> but I do not understand what was happening. If I pulled up an OSD display, I
> could see the video playing through the transparent parts of the OSD, but
> the rest of the screen remained black.

Peter, have you gotten and tried the 4k update yet? I read that it
started rolling out to the public over the weekend.

> > thoughts?
> >
>
> This is mostly way over my head, but it sounds good.
>
> My recommendation is that we should only commit this into master after we
> have created the fixes/30 branch in January. The changes sound major and
> they could potentially cause some unexpected issues with some
> configurations. I would not want the users who install from Ubuntu 19.04 to
> encounter problems.

As I noted elsewhere, Mark's fixes should still go in (as long as
they're not too invasive and risky). Any major work should wait.
Mark, if you're still feeling ambitious, create a branch named
devel/<whatever> and commit there until the release branch is cut.

David
--
David Engel
david@istwok.net
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On Mon, 17 Dec 2018 at 16:20, Peter Bennett <pb.mythtv@gmail.com> wrote:
> I think it is actually about two thirds of the way across the screen that it starts. Other details -
> The jaggies occur with both interlaced and progressive video.
> In file mythrender_opengl2es.h there is the line
> m_qualifiers = "precision highp float;\n";
> This was previously set as mediump - and the jaggies occurred on the fire TV and fire TV 4k.
> After I changed this to highp -
> On fire TV there was an error message that highp is not supported and it is reverting to mediump, and the jaggies remain
> On fire TV 4K the jaggies went away and the picture is perfect

There is an ifdef that needs to go into the shaders - platforms that
support highp set a define. Should fix the error message.
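
Presumably something along these lines, keyed off the standard GLSL ES
macro GL_FRAGMENT_PRECISION_HIGH (the m_qualifiers member is the one
from mythrender_opengl2es.h quoted above; the ifdef body is a sketch):

    // Request highp only where the platform advertises support for it,
    // falling back to mediump elsewhere - this avoids the 'highp is
    // not supported' error on devices like the non-4K fire TV.
    m_qualifiers = "#ifdef GL_FRAGMENT_PRECISION_HIGH\n"
                   "precision highp float;\n"
                   "#else\n"
                   "precision mediump float;\n"
                   "#endif\n";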

I don't think there is a way around the sampling issue with the UYVY
code as it stands. Once precision is lost the shader starts to select
the wrong texture sample - and because 2 pixels are packed into one
sample, the sampling is highly inaccurate. It will be less obvious in
the other methods that don't use the extra packing - but will still
be an issue.
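
A standalone illustration of why the dense packing is so sensitive
(modelling mediump as a float with an 11-bit significand - a
simplification, and not MythTV code): once the pixel coordinate
arithmetic exceeds the significand, the left/right choice inside a
packed texel collapses.

    #include <cmath>
    #include <cstdio>

    // Crude model of GLSL ES mediump: round to ~11 significant bits.
    static float toMediump(float v)
    {
        int exp = 0;
        float m = std::frexp(v, &exp);           // v = m * 2^exp
        m = std::round(m * 2048.0f) / 2048.0f;
        return std::ldexp(m, exp);
    }

    int main()
    {
        // For pixel x, a packed-UYVY shader effectively asks whether
        // fract(x / 2) >= 0.5 to choose the left or right pixel of a
        // texel. Watch the answer break down once x passes 2048.
        for (int x = 2044; x < 2056; ++x)
        {
            bool exact = std::fmod(x * 0.5f, 1.0f) >= 0.5f;
            bool rough = std::fmod(toMediump(x * 0.5f), 1.0f) >= 0.5f;
            std::printf("x=%4d exact=%-5s mediump=%-5s%s\n", x,
                        exact ? "right" : "left",
                        rough ? "right" : "left",
                        exact != rough ? "  <-- wrong pixel" : "");
        }
        return 0;
    }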

> This is mostly way over my head, but it sounds good.
>
> My recommendation is that we should only commit this into master after we have created the fixes/30 branch in January. The changes sound major and they could potentially cause some unexpected issues with some configurations. I would not want the users who install from Ubuntu 19.04 to encounter problems.

I have a patch for the UYVY code ready to go. It will fix the
interlaced chroma issue and the precision problem. It is not at all
invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
work after the change. It effectively removes the custom UYVY code. We
still use UYVY but it is not as densely packed - so there is a larger
texture in video memory but no sampling problem and no need for the
extra filter stage.
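
A sketch of the two layouts (struct and helpers invented for
illustration): the dense form carries two pixels per RGBA texel and
forces the shader to pick one of them, while the looser form spends a
whole texel per pixel, so every fragment maps 1:1 to a texel with no
precision-sensitive selection.

    #include <cstdint>

    struct Texel { uint8_t r, g, b, a; };

    // Dense UYVY: one texel holds a whole U-Y0-V-Y1 pair (two pixels),
    // halving texture width but requiring sub-texel selection.
    static Texel packDense(uint8_t u, uint8_t y0, uint8_t v, uint8_t y1)
    {
        return Texel{u, y0, v, y1};
    }

    // Looser packing: one texel per pixel, chroma duplicated across
    // the pair - a larger texture, but a single 1:1 fetch per pixel.
    static void packLoose(uint8_t u, uint8_t y0, uint8_t v, uint8_t y1,
                          Texel out[2])
    {
        out[0] = Texel{u, y0, v, 255};
        out[1] = Texel{u, y1, v, 255};
    }
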
Regards
Mark
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On Mon, 17 Dec 2018 at 20:02, David Engel <david@istwok.net> wrote:
> As I noted elsewhere, Mark's fixes should still go in (as long as
> they're not too invasive and risky). Any major work should wait.
> Mark, if you're still feeling ambitious, create a branch named
> devel/<whatever> and commit there until the release branch is cut.

Removing the custom UYVY packing is straightforward and fixes 2 subtle issues.
Mark
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On 12/17/18 3:20 PM, Mark Kendall wrote:
> On Mon, 17 Dec 2018 at 16:20, Peter Bennett <pb.mythtv@gmail.com> wrote:
>> I think it is actually about two thirds of the way across the screen that it starts. Other details -
>> The jaggies occur with both interlaced and progressive video.
>> In file mythrender_opengl2es.h there is the line
>> m_qualifiers = "precision highp float;\n";
>> This was previously set as mediump - and the jaggies occurred on the fire TV and fire TV 4k.
>> After I changed this to highp -
>> On fire TV there was an error message that highp is not supported and it is reverting to mediump, and the jaggies remain
>> On fire TV 4K the jaggies went away and the picture is perfect
> There is an ifdef that needs to go into the shaders - platforms that
> support highp set a define. Should fix the error message.
>
> I don't think there is a way around the sampling issue with the UYVY
> code as it stands. Once precision is lost the shader starts to select
> the wrong texture sample - and because 2 pixels are packed into one
> sample, the sampling is highly inaccurate. It will be less obvious in
> the other methods that don't use the extra packing - but will still
> be an issue.
>
>> This is mostly way over my head, but it sounds good.
>>
>> My recommendation is that we should only commit this into master after we have created the fixes/30 branch in January. The changes sound major and they could potentially cause some unexpected issues with some configurations. I would not want the users who install from Ubuntu 19.04 to encounter problems.
> I have a patch for the UYVY code ready to go. It will fix the
> interlaced chroma issue and the precision problem. It is not at all
> invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
> work after the change. It effectively removes the custom UYVY code. We
> still use UYVY but it is not as densely packed - so there is a larger
> texture in video memory but no sampling problem and no need for the
> extra filter stage.
> Regards
> Mark
> _______________________________________________
>
Welcome back!

It is OK with me if you are confident and you want to commit it now.
Let us know when you have either a patch or have committed it so that
I can do some testing.

Regards
Peter
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On 12/18/2018 7:40 AM, Peter Bennett wrote:
>
>
> On 12/17/18 3:20 PM, Mark Kendall wrote:
>> On Mon, 17 Dec 2018 at 16:20, Peter Bennett <pb.mythtv@gmail.com> wrote:
>>> I think it is actually about two thirds of the way across the screen
>>> that it starts. Other details -
>>> The jaggies occur with both interlaced and progressive video.
>>> In file mythrender_opengl2es.h there is the line
>>> m_qualifiers = "precision highp float;\n";
>>> This was previously set as mediump - and the jaggies occurred on the
>>> fire TV  and fire TV 4k.
>>> After I changed this to highp -
>>> On fire TV there was an error message that highp is not supported
>>> and it is reverting to mediump, and the jaggies remain
>>> On fire TV 4K the jaggies went away and the picture is perfect
>> There is an ifdef that needs to go into the shaders - platforms that
>> support highp set a define. Should fix the error message.
>>
>> I don't think there is a way around the sampling issue with the UYVY
>> code as it stands. Once precision is lost the shader starts to select
>> the wrong texture sample - and because 2 pixels are packed into one
>> sample, the sampling is highly inaccurate. It will be less obvious in
>> the other methods that don't use the extra packing  - but will still
>> be an issue.
>>
>>> This is mostly way over my head, but it sounds good.
>>>
>>> My recommendation is that we should only commit this into master
>>> after we have created the fixes/30 branch in January. The changes
>>> sound major and they could potentially cause some unexpected issues
>>> with some configurations. I would not want the users who install
>>> from Ubuntu 19.04 to encounter problems.
>> I have a patch for the UYVY code ready to go. It will fix the
>> interlaced chroma issue and the precision problem. It is not at all
>> invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
>> work after the change. It effectively removes the custom UYVY code. We
>> still use UYVY but it is not as densely packed - so there is a larger
>> texture in video memory but no sampling problem and no need for the
>> extra filter stage.
>> Regards
>> Mark
>> _______________________________________________
>>
> Welcome back!
>
> It is OK with me if are confident and you want to commit it now. Let
> us know when you have either a patch or have committed it so that I
> can do some testing.

Looking forward to your GL insights Mark. It's very confusing when you
don't have the model in your head.

I saw shader issues too but I never had the guts to commit changes. I am
still running some android only local shader GL patches.

Mark

_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On 12/17/18 3:01 PM, David Engel wrote:
> Peter, have you gotten and tried the 4k update yet? I read that
> started rolling out to the public over the weekend.
I checked my fire stick 4K and it got an update on Dec 14th (Friday). I
don't know if that changed anything. It has been very reliable, we have
been watching for a few hours on it every evening and no problems. IMHO
it is a fine solution for a poor man's Shield.

Peter
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On Mon, Dec 17, 2018 at 08:20:57PM +0000, Mark Kendall wrote:
> I have a patch for the UYVY code ready to go. It will fix the
> interlaced chroma issue and the precision problem. It is not at all
> invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
> work after the change. It effectively removes the custom UYVY code. We
> still use UYVY but it is not as densely packed - so there is a larger
> texture in video memory but no sampling problem and no need for the
> extra filter stage.

Mark, the colors are now messed up on my nvidia shield and firetv
stick 4k when the YV12 option is not enabled. It looks like the red
and blue are reversed. I suspect commit 43b64d5c. The colors are
correct when YV12 is enabled and on Linux regardless of the YV12
setting.

David
--
David Engel
david@istwok.net
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On Wed, Dec 19, 2018, 2:52 AM David Engel <david@istwok.net> wrote:

> On Mon, Dec 17, 2018 at 08:20:57PM +0000, Mark Kendall wrote:
> > I have a patch for the UYVY code ready to go. It will fix the
> > interlaced chroma issue and the precision problem. It is not at all
> > invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
> > work after the change. It effectively removes the custom UYVY code. We
> > still use UYVY but it is not as densely packed - so there is a larger
> > texture in video memory but no sampling problem and no need for the
> > extra filter stage.
>
> Mark, the colors are now messed up on my nvidia shield and firetv
> stick 4k when the YV12 option is not enabled. It looks like the red
> and blue are reversed. I suspect commit 43b64d5c. The colors are
> correct when YV12 is enabled and on Linux regardless of the YV12
> setting.
>
> David
> --
>

Sounds like the UV planes are swapped.

Must be an issue with the non-mmx packing code - which is the only thing I
didn't think to check.

Or perhaps an endianness issue?

Sitting on tarmac waiting for delayed flight to leave and won't be back
until after Christmas - so can't look at it for a few days.

Regards
Mark

>
Re: Playback next steps [ In reply to ]
On Wed, Dec 19, 2018 at 07:10:31AM +0000, Mark Kendall wrote:
> On Wed, Dec 19, 2018, 2:52 AM David Engel <david@istwok.net> wrote:
>
> > On Mon, Dec 17, 2018 at 08:20:57PM +0000, Mark Kendall wrote:
> > > I have a patch for the UYVY code ready to go. It will fix the
> > > interlaced chroma issue and the precision problem. It is not at all
> > > invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
> > > work after the change. It effectively removes the custom UYVY code. We
> > > still use UYVY but it is not as densely packed - so there is a larger
> > > texture in video memory but no sampling problem and no need for the
> > > extra filter stage.
> >
> > Mark, the colors are now messed up on my nvidia shield and firetv
> > stick 4k when the YV12 option is not enabled. It looks like the red
> > and blue are reversed. I suspect commit 43b64d5c. The colors are
> > correct when YV12 is enabled and on Linux regardless of the YV12
> > setting.
> >
> > David
> > --
> >
>
> Sounds like the UV planes are swapped.

Yes, what I figured.

> Must be an issue with the non-mmx packing code - which is the only thing I
> didn't think to check.

Also what I figured.

> Or perhaps an endianness issue?

All indications are the Shield is little endian so that's probably not
it.

> Sitting on tarmac waiting for delayed flight to leave and won't be back
> until after Christmas - so can't look at it for a few days.

I'll see if I can figure it out.

David
--
David Engel
david@istwok.net
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On Wed, Dec 19, 2018 at 10:02:35AM -0600, David Engel wrote:
> On Wed, Dec 19, 2018 at 07:10:31AM +0000, Mark Kendall wrote:
> > On Wed, Dec 19, 2018, 2:52 AM David Engel <david@istwok.net> wrote:
> >
> > > On Mon, Dec 17, 2018 at 08:20:57PM +0000, Mark Kendall wrote:
> > > > I have a patch for the UYVY code ready to go. It will fix the
> > > > interlaced chroma issue and the precision problem. It is not at all
> > > > invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
> > > > work after the change. It effectively removes the custom UYVY code. We
> > > > still use UYVY but it is not as densely packed - so there is a larger
> > > > texture in video memory but no sampling problem and no need for the
> > > > extra filter stage.
> > >
> > > Mark, the colors are now messed up on my nvidia shield and firetv
> > > stick 4k when the YV12 option is not enabled. It looks like the red
> > > and blue are reversed. I suspect commit 43b64d5c. The colors are
> > > correct when YV12 is enabled and on Linux regardless of the YV12
> > > setting.
> > >
> > > David
> > > --
> > >
> >
> > Sounds like the UV planes are swapped.
>
> Yes, what I figured.
>
> > Must be an issue with the non-mmx packing code - which is the only thing I
> > didn't think to check.
>
> Also what I figured.
>
> > Or perhaps an endianness issue?
>
> All indications are the Shield is little endian so that's probably not
> it.
>
> > Sitting on tarmac waiting for delayed flight to leave and won't be back
> > until after Christmas - so can't look at it for a few days.
>
> I'll see if I can figure it out.

I don't see the obvious fix. My best guess is something is not
accounting for GL_BGRA being set to GL_RGBA because the former is
missing in Android's gl.h.
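
If that guess is right, the failure mode would look something like this
(a hypothetical shim, not the actual MythTV code path): core GLES2
headers have no GL_BGRA - it only exists as GL_BGRA_EXT via the
GL_EXT_texture_format_BGRA8888 extension - so aliasing it away makes
the code compile but silently reinterprets the channel order.

    // Hypothetical compatibility shim: compiles on Android, but any
    // buffer actually laid out as B,G,R,A is now uploaded as R,G,B,A,
    // so red and blue swap on screen.
    #ifndef GL_BGRA
    #define GL_BGRA GL_RGBA
    #endif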

David
--
David Engel
david@istwok.net
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On 12/19/18 3:45 PM, David Engel wrote:
> On Wed, Dec 19, 2018 at 10:02:35AM -0600, David Engel wrote:
>> On Wed, Dec 19, 2018 at 07:10:31AM +0000, Mark Kendall wrote:
>>> On Wed, Dec 19, 2018, 2:52 AM David Engel <david@istwok.net> wrote:
>>>
>>>> On Mon, Dec 17, 2018 at 08:20:57PM +0000, Mark Kendall wrote:
>>>>> I have a patch for the UYVY code ready to go. It will fix the
>>>>> interlaced chroma issue and the precision problem. It is not at all
>>>>> invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
>>>>> work after the change. It effectively removes the custom UYVY code. We
>>>>> still use UYVY but it is not as densely packed - so there is a larger
>>>>> texture in video memory but no sampling problem and no need for the
>>>>> extra filter stage.
>>>> Mark, the colors are now messed up on my nvidia shield and firetv
>>>> stick 4k when the YV12 option is not enabled. It looks like the red
>>>> and blue are reversed. I suspect commit 43b64d5c. The colors are
>>>> correct when YV12 is enabled and on Linux regardless of the YV12
>>>> setting.
>>>>
>>>> David
>>>> --
>>>>
>>> Sounds like the UV planes are swapped.
>> Yes, what I figured.
>>
>>> Must be an issue with the non-mmx packing code - which is the only thing I
>>> didn't think to check.
>> Also what I figured.
>>
>>> Or perhaps an endianness issue?
>> All indications are the Shield is little endian so that's probably not
>> it.
>>
>>> Sitting on tarmac waiting for delayed flight to leave and won't be back
>>> until after Christmas - so can't look at it for a few days.
>> I'll see if I can figure it out.
> I don't see the obvious fix. My best guess is something is not
> accounting for GL_BGRA being set to GL_RGBA because the former is
> missing in Android's gl.h.
>
> David

I have not tried these latest fixes. However, it has always been the
case that if you uncheck both YV12 and UYVY it defaults to BGRA and
displays mangled colors in android. Perhaps this is related to the new
problem.

Peter
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Playback next steps [ In reply to ]
On Wed, Dec 19, 2018 at 04:51:31PM -0500, Peter Bennett wrote:
>
>
> On 12/19/18 3:45 PM, David Engel wrote:
> > On Wed, Dec 19, 2018 at 10:02:35AM -0600, David Engel wrote:
> > > On Wed, Dec 19, 2018 at 07:10:31AM +0000, Mark Kendall wrote:
> > > > On Wed, Dec 19, 2018, 2:52 AM David Engel <david@istwok.net> wrote:
> > > >
> > > > > On Mon, Dec 17, 2018 at 08:20:57PM +0000, Mark Kendall wrote:
> > > > > > I have a patch for the UYVY code ready to go. It will fix the
> > > > > > interlaced chroma issue and the precision problem. It is not at all
> > > > > > invasive. If OpenGLVideo currently works without YV12 or UYVY, it will
> > > > > > work after the change. It effectively removes the custom UYVY code. We
> > > > > > still use UYVY but it is not as densely packed - so there is a larger
> > > > > > texture in video memory but no sampling problem and no need for the
> > > > > > extra filter stage.
> > > > > Mark, the colors are now messed up on my nvidia shield and firetv
> > > > > stick 4k when the YV12 option is not enabled. It looks like the red
> > > > > and blue are reversed. I suspect commit 43b64d5c. The colors are
> > > > > correct when YV12 is enabled and on Linux regardless of the YV12
> > > > > setting.
> > > > >
> > > > > David
> > > > > --
> > > > >
> > > > Sounds like the UV planes are swapped.
> > > Yes, what I figured.
> > >
> > > > Must be an issue with the non-mmx packing code - which is the only thing I
> > > > didn't think to check.
> > > Also what I figured.
> > >
> > > > Or perhaps an endianness issue?
> > > All indications are the Shield is little endian so that's probably not
> > > it.
> > >
> > > > Sitting on tarmac waiting for delayed flight to leave and won't be back
> > > > until after Christmas - so can't look at it for a few days.
> > > I'll see if I can figure it out.
> > I don't see the obvious fix. My best guess is something is not
> > accounting for GL_BGRA being set to GL_RGBA because the former is
> > missing in Android's gl.h.
> >
> > David
>
> I have not tried these latest fixes. However, it has always been the case
> that if you uncheck both YV12 and UYVY it defaults to BGRA and displays
> mangled colors in android. Perhaps this is related to the new problem.

It's probably the same long-standing problem. The latest change
removes the UYVY option and causes the default format to be BGRA. I
wonder if defaulting to RGBA would work. Is there an advantage to
using BGRA when available instead of RGBA?
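
On the BGRA question: on desktop GL, GL_BGRA is traditionally preferred
because it matches the native framebuffer byte order and can avoid a
driver-side swizzle on upload; GLES only offers it through the
GL_EXT_texture_format_BGRA8888 extension. A sketch of choosing at
runtime (function name and structure are illustrative):

    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    // Prefer BGRA only when the extension is actually advertised;
    // otherwise fall back to the always-available RGBA.
    static GLenum preferredPackedFormat(bool haveBgraExtension)
    {
    #ifdef GL_BGRA_EXT
        if (haveBgraExtension)
            return GL_BGRA_EXT;   // GL_EXT_texture_format_BGRA8888
    #endif
        return GL_RGBA;
    }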

David
--
David Engel
david@istwok.net
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
