Mailing List Archive

Re: YV12 problem
On 12/11/18 3:13 PM, Peter Bennett wrote:
> Hi David
>
> Please can you point me to the patches that you are planning to
> install, or specific instructions on which commits you will reverse,
> so that I can do a test.
>
> Here are close-up photos of the screen from current master with UYVY
> and YV12 when using fire stick (non-4k) or "mi box s" or other devices
> that do not support high precision opengl es.
>
> Screen using YV12 checked ---> https://imgur.com/lbfyEWQ
> Screen using YV12 unchecked (i.e. using UYVY) --->
> https://imgur.com/dLoMUau
>
> If you send me your patches I will test with those and see how the
> picture looks.
>
> Peter

I have tried your patch on ticket #13358. It produces the jagged result
(https://imgur.com/dLoMUau) on the Fire TV (non-4k), without any ability
to fix it.

I agree that the code you are removing is flawed, as stated by Mark. I
recommend that we change the default for the "enable YV12" setting  to
be "false" for everybody, which bypasses that flawed code and achieves
the same effect as removing it. The users of fire tv non-4k, mi box s,
or other android devices that do not support high precision can still
enable that check box to fix the jaggies, albeit with flawed code that
does not properly support kernel deinterlace. However, using Linear
blend deinterlace with YV12 is better than looking at that jagged picture.

If somebody can come up with better code that supports these android
devices without the jaggies, we can then remove that faulty YV12 code.

If my recommendation is not taken and the code is removed, then we need
to remove the setting from the frontend setup for enabling/disabling
YV12, since that will no longer have any effect. I will also need to
update the wiki to remove the recommendations around YV12 for android.

Peter
_______________________________________________
mythtv-dev mailing list
mythtv-dev@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-dev
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: YV12 problem [ In reply to ]
On Tue, Dec 11, 2018 at 05:27:21PM -0500, Peter Bennett wrote:
> On 12/11/18 3:13 PM, Peter Bennett wrote:
> > Hi David
> >
> > Please can you point me to the patches that you are planning to install,
> > or specific instructions on which commits you will reverse, so that I
> > can do a test.
> >
> > Here are close-up photos of the screen from current master with UYVY and
> > YV12 when using fire stick (non-4k) or "mi box s" or other devices that
> > do not support high precision opengl es.
> >
> > Screen using YV12 checked ---> https://imgur.com/lbfyEWQ
> > Screen using YV12 unchecked (i.e. using UYVY) --->
> > https://imgur.com/dLoMUau
> >
> > If you send me your patches I will test with those and see how the
> > picture looks.
> >
> > Peter
>
> I have tried your patch on ticket #13358. It produces the jagged result
> (https://imgur.com/dLoMUau) on the Fire TV (non-4k), without any ability to
> fix it.

I tested on my second generation firetv stick and first generation
firetv tonight with and without the reversion patch. I saw no jaggies
whatsoever.

The firetv 1g was quite usable, but more so with mediacodec decoding.
With ffmpeg decoding of mpeg2 video, 720p was fine but 1080i needed
bob or one-field deinterlacing to not stutter.

The firetv stick 2g wasn't as nice. I couldn't find any combination
of settings that didn't stutter. That's in line with what I've heard
and read about all firetv sticks pre-4k -- they just aren't powerful
enough for MythTV.

Both devices worked reasonably well with Kodi as a MythTV frontend.
The firetv stick 2g didn't support any deinterlacing, though. That's
in line with my past experience -- Kodi is considerably less
resource-intensive than MythTV.

> I agree that the code you are removing is flawed, as stated by Mark. I
> recommend that we change the default for the "enable YV12" setting to be
> "false" for everybody, which bypasses that flawed code and achieves the same
> effect as removing it. The users of fire tv non-4k, mi box s, or other
> android devices that do not support high precision can still enable that
> check box to fix the jaggies, albeit with flawed code that does not properly
> support kernel deinterlace. However, using Linear blend deinterlace with
> YV12 is better than looking at that jagged picture.
>
> If somebody can come up with better code that supports these android devices
> without the jaggies, we can then remove that faulty YV12 code.
>
> If my recommendation is not taken and the code is removed, then we need to
> remove the setting from the frontend setup for enabling/disabling YV12,
> since that will no longer have any effect. I will also need to update the
> wiki to remove the recommendations around YV12 for android.

It's not just the kernel deinterlacer part, it's all of the yv12
support that I find suspicious knowing what I know now. Witness the
"green line" issue that is "fixed" by not using yv12. Maybe there's a
good reason for it but it tends to reinforce that that code is just
buggy.

Note that I'm not opposed to keeping the yv12 code, especially if it
is indeed more efficient. However, if that is to be done, I think the
code needs to be thoroughly re-reviewed and fixed as needed. That
includes fixing the yv12 kernel deinterlacing part (preferred) or
removing it (not preferred but better than leaving in known buggy
code).

FYI, I uploaded separate reversion patches for the two commits
currently considered in ticket #13358.

David
--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
YV12 (same as yuv420p) is 3 planes, nv12 is 2 planes (u and v planes interleaved).

Modifying the OGL deinterlacer to handle nv12 should be trivial. I don't know of any hardware decoder outputting 3-plane yuv.

Next month I'm back in Oz permanently where I have a working mythtv system. I'll be able to get back into coding a bit (plus I need a new hobby).
Happy to give a hand then.

________________________________
From: Peter Bennett
Sent: Tuesday, 11 December 2018 23:27
To: David Engel; Development of MythTV
Subject: Re: [mythtv] YV12 problem

On 12/11/18 3:13 PM, Peter Bennett wrote:
> Hi David
>
> Please can you point me to the patches that you are planning to
> install, or specific instructions on which commits you will reverse,
> so that I can do a test.
>
> Here are close-up photos of the screen from current master with UYVY
> and YV12 when using fire stick (non-4k) or "mi box s" or other devices
> that do not support high precision opengl es.
>
> Screen using YV12 checked ---> https://imgur.com/lbfyEWQ
> Screen using YV12 unchecked (i.e. using UYVY) --->
> https://imgur.com/dLoMUau
>
> If you send me your patches I will test with those and see how the
> picture looks.
>
> Peter

I have tried your patch on ticket #13358. It produces the jagged result
(https://imgur.com/dLoMUau) on the Fire TV (non-4k), without any ability
to fix it.

I agree that the code you are removing is flawed, as stated by Mark. I
recommend that we change the default for the "enable YV12" setting  to
be "false" for everybody, which bypasses that flawed code and achieves
the same effect as removing it. The users of fire tv non-4k, mi box s,
or other android devices that do not support high precision can still
enable that check box to fix the jaggies, albeit with flawed code that
does not properly support kernel deinterlace. However, using Linear
blend deinterlace with YV12 is better than looking at that jagged picture.

If somebody can come up with better code that supports these android
devices without the jaggies, we can then remove that faulty YV12 code.

If my recommendation is not taken and the code is removed, then we need
to remove the setting from the frontend setup for enabling/disabling
YV12, since that will no longer have any effect. I will also need to
update the wiki to remove the recommendations around YV12 for android.

Peter
Re: YV12 problem [ In reply to ]
________________________________
From: David Engel <david@istwok.net>
Sent: Wednesday, 12 December 2018 07:26
To: Peter Bennett
Cc: Development of MythTV
Subject: Re: [mythtv] YV12 problem


> If somebody can come up with better code that supports these android devices
> without the jaggies, we can then remove that faulty YV12 code.
>

If you remove yv12 support, the software decoders won't work any longer.
Now in the TV world that's probably fine, but do we really want not to play those files?
Re: YV12 problem [ In reply to ]
On Wed, Dec 12, 2018 at 08:55:22AM +0100, Jean-Yves Avenard wrote:
> From: David Engel <david@istwok.net>
> Sent: Wednesday, 12 December 2018 07:26
> To: Peter Bennett
> Cc: Development of MythTV
> Subject: Re: [mythtv] YV12 problem
>
>
> > If somebody can come up with better code that supports these android devices
> > without the jaggies, we can then remove that faulty YV12 code.
> >
>
> If you remove yv12 support software decoder won't work any longer.
> Now in the TV world that's probably fine, but do we really want not to play those files?

Please explain. Without more context and being mostly ignorant of
pixel format implications (though I'm trying to correct that), I
don't know what to make of this statement.

David
--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
On Wed, Dec 12, 2018 at 08:51:56AM +0100, Jean-Yves Avenard wrote:
> YV12 (same as yuv420p) is 3 planes, nv12 is 2 planes (u and v planes interleaved).
>
> Modifying the OGL deinterlacer to handle nv12 should be trivial. I don't know of any hardware decoder outputting 3 planes yuv.

You touch on a question I was going to ask. That is, why does kernel
deinterlacing need an entirely new shader program for yv12 while
linear blend doesn't seem to need a new one?

> Next month I'm back in Oz permanently where I have a working mythtv system. I'll be able to get back into coding a bit (plus I need a new hobby).
> Happy to give a hand then.

Change in job status? Desired or undesired change?

David
--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
On 12/12/18 1:26 AM, David Engel wrote:
> I tested on my second generation firetv stick and first generation
> firetv tonight with and without the reversio patch. I saw no jaggies
> whatsoever.
The jaggies occur on the right-hand half of the picture. They may not be
noticeable depending on the content you are watching. This happens in
1080i and 720p, but they are more visible in 720p content. The right-hand
one third to one half of the screen has half resolution, due to
insufficient precision in opengl es when applying the luminance via the
shader.
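
(For reference, whether a given device falls into this bucket can be
checked at runtime. A rough sketch below, assuming a current OpenGL ES 2.0
context; this is not code from MythTV.)

// Rough sketch: query whether fragment shaders support highp floats on
// this OpenGL ES device. Zero range/precision means highp is unsupported,
// which is where the luma maths loses bits on these devices.
#include <GLES2/gl2.h>
#include <cstdio>

bool FragmentHighpSupported(void)
{
    GLint range[2] = {0, 0}; // log2 of the min/max representable magnitude
    GLint precision = 0;     // log2 of the relative precision
    glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT,
                               range, &precision);
    printf("fragment highp: range [%d, %d], precision %d bits\n",
           range[0], range[1], precision);
    return precision != 0;
}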

Please try this video and look at the fox sports logo. I am using fire
stick 2g. This is progressive 720p content, and I use mediacodec decoding.

https://www.dropbox.com/s/e98zdwdgnpt5vav/panning.ts?dl=0


> The firetv 1g was quite usable but moreso with mediacodec decoding.
> With ffmpeg decoding of mpeg2 video, 720p was fine but 1080i needed
> bob or one-field deinterlacing to not stutter.
>
> The firetv stick 2g wasn't as nice. I couldn't find any combination
> of setting that didn't stutter. That's in line with what I've heard
> and read about all firetv stick pre-4k -- they just aren't powerful
> enought for MythTV.
>
Fire stick 2g has jerky movement due to a reduced frame rate (many
frames are dropped). The audio is smooth without interruption, audio/video
sync is fine, and playback keeps up. There are some people who are
happy to watch it this way, although I would not like the jerky
movement. I would rather not add jaggies to the mix if possible.

I use the video, advanced, avsync2, standard decoding for mpeg2 with
linear blend deinterlace, and mediacodec for non-mpeg2.

Peter

Re: YV12 problem [ In reply to ]
On Wed, Dec 12, 2018 at 12:48:08PM -0500, Peter Bennett wrote:
>
>
> On 12/12/18 1:26 AM, David Engel wrote:
> > I tested on my second generation firetv stick and first generation
> > firetv tonight with and without the reversio patch. I saw no jaggies
> > whatsoever.
> The jaggies occur on the right hand half of the picture. They may not be
> noticeable depending on the content you are watching. This happens in 1080i
> and 720p, but they are more visible in 720p content. The right hand one
> third or one half of the screen has half resolution, due to insufficient
> precision in opengl es when applying the luminance via the shader.
>
> Please try this video and look at the fox sports logo. I am using fire stick
> 2g. This is progressive 720p content, and I use mediacodec decoding.
>
> https://www.dropbox.com/s/e98zdwdgnpt5vav/panning.ts?dl=0
>
>
> > The firetv 1g was quite usable but moreso with mediacodec decoding.
> > With ffmpeg decoding of mpeg2 video, 720p was fine but 1080i needed
> > bob or one-field deinterlacing to not stutter.
> >
> > The firetv stick 2g wasn't as nice. I couldn't find any combination
> > of setting that didn't stutter. That's in line with what I've heard
> > and read about all firetv stick pre-4k -- they just aren't powerful
> > enought for MythTV.
> >
> Fire stick 2g has jerky movement due to a reduced frame rate (many frames
> are dropped). The audio is smooth without interruption, audio video sync is
> fine and playback keeps up. There are some people who are happy to watch it
> this way, although I would not like the jerky movement. I would rather not
> add jaggies to the mix if possible.
>
> I use the video, advanced, avsync2, standard decoding for mpeg2 with linear
> blend deinterlace, and mediacodec for non-mpeg2.

I will try the stick 2g again tonight.

David
--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
Hi

On Wed, 12 Dec 2018 at 5:04 pm, David Engel <david@istwok.net> wrote:

> On Wed, Dec 12, 2018 at 08:51:56AM +0100, Jean-Yves Avenard wrote:
> > YV12 (same as yuv420p) is 3 planes, nv12 is 2 planes (u and v planes
> interleaved).
> >
> > Modifying the OGL deinterlacer to handle nv12 should be trivial. I don't
> know of any hardware decoder outputting 3 planes yuv.
>
> You touch on a question I was going to ask. That is why does kernel
> deinterlacing need an entirely new shader program for yv12 while
> linear blend doesn't seem to need a new one?


The data between the two is the same, just stored differently. It depends
on what backend is used for storage, but really the difference should be 1
or 2 arithmetic operations in how the pixel data is accessed.


>
> > Next month I'm back in Oz permanently where I have a working mythtv
> system. I'll be able to get back into coding a bit (plus I need a new
> hobby).
> > Happy to give a hand then.
>
> Change in job status? Desired or undesired change?


Same job, just that my Aussie wife wanted to go back to Oz.

I work from home, so it doesn't change much otherwise.
Though I won't have to stay super late to do meetings with the US anymore;
now that's nice.
Re: YV12 problem [ In reply to ]
Hi

On Wed, 12 Dec 2018 at 5:02 pm, David Engel <david@istwok.net> wrote:

> On Wed, Dec 12, 2018 at 08:55:22AM +0100, Jean-Yves Avenard wrote:
> > From: David Engel <david@istwok.net>
> > Sent: Wednesday, 12 December 2018 07:26
> > To: Peter Bennett
> > Cc: Development of MythTV
> > Subject: Re: [mythtv] YV12 problem
> >
> >
> > > If somebody can come up with better code that supports these android
> devices
> > > without the jaggies, we can then remove that faulty YV12 code.
> > >
> >
> > If you remove yv12 support software decoder won't work any longer.
> > Now in the TV world that's probably fine, but do we really want not to
> play those files?
>
> Please explain. Without more context and being mostly ignorant of
> pixel format implilcations (though, I'm trying to correct that), I
> don't know what to make of this statement


TV broadcasts use exclusively mpeg2, h264 and h265.
Most embedded systems like the fire2 have a hardware decoder for those.

So what you get out of the decoder will be a GPU based nv12 image.

For other codecs like say vp8, vp9, av1: you have to use a software decoder
and they will output yuv420 (if 8 bits).

I was just saying earlier that dropping yuv420 means you'll have to do a
conversion to nv12 right outside the decoder, so an extra memory allocation
and an unnecessary copy.
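
For illustration, that extra repack is roughly the following. This is only
a rough sketch with hypothetical plane pointers and strides (not a real
decoder API), ignoring alignment and odd dimensions:

// Rough sketch: interleave yuv420p (YV12-style) chroma planes into an
// NV12 UV plane. Plane pointers and strides are hypothetical, not a real
// decoder API; the Y plane is reused or copied unchanged.
#include <cstdint>
#include <cstddef>

void RepackChromaToNV12(const uint8_t *u, const uint8_t *v, size_t stride420,
                        uint8_t *uvOut, size_t strideNV12,
                        int width, int height)
{
    // 4:2:0 chroma planes are half width and half height.
    for (int row = 0; row < height / 2; ++row)
    {
        const uint8_t *uRow = u + row * stride420;
        const uint8_t *vRow = v + row * stride420;
        uint8_t *out = uvOut + row * strideNV12;
        for (int col = 0; col < width / 2; ++col)
        {
            out[2 * col]     = uRow[col]; // U
            out[2 * col + 1] = vRow[col]; // V
        }
    }
}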

All that, when converting any nv12 shader to do yv12 is trivial; you could
even use the same code for both.

BTW, the default browser on the fire2 is Firefox :)

Story goes that Bezos contacted our director personally asking if we could
port it in 2 weeks. The challenge was accepted.


Re: YV12 problem [ In reply to ]
On 12/12/18 5:49 PM, Jean-Yves Avenard wrote:
> TV broadcasts use exclusively mpeg2, h264 and h265.
> Most embedded system like the fire2 have a hardware decoder for those.
>
> So what you get out of the decoder will be a GPU based nv12 image.
>
> For other codecs like say vp8, vp9, av1: you have to use a software
> decoder and they will output yuv420 (if 8 bits).
>
> I was just saying earlier that dropping yuv420, means you'll have to
> do a conversion to nv12 right outside the decoder, so an extra memory
> allocation and unnecessary copy.
>
> All when converting any nv12 shader to do yv12 is trivial; you could
> even use the same code for both.
>
Jya - some quick notes on what I have been up to -

I have added MythTV code to decode using mediacodec via FFmpeg, plus new
code to support vaapi with deinterlacing (called vaapi2 in MythTV), and I
am working on nvdec. However, I need to implement direct output from the
decoder to video. Currently, everything I have added decodes to memory and
then uses the existing MythTV OpenGL code to render. This is not fast
enough for 4K video. I will have to learn how to do the direct output from
decode to OpenGL.

One problem with mediacodec decoding is that on most devices it does not
do deinterlacing, and it does not pass MythTV the indicator that says the
video is interlaced. This forces me to use software decoding for mpeg2 so
that we can detect the interlacing and use the OpenGL deinterlacer.
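
(For reference, the indicator in question is the interlaced flag on the
decoded AVFrame. A rough sketch of the check follows - not the actual
MythTV code - and with mediacodec these fields typically come back zero
even for interlaced mpeg2:)

// Rough sketch: the AVFrame fields MythTV effectively relies on to decide
// whether to engage a deinterlacer. With mediacodec they are usually unset.
extern "C" {
#include <libavutil/frame.h>
}

bool FrameFlaggedInterlaced(const AVFrame *frame)
{
    return frame->interlaced_frame != 0; // content flagged as interlaced
}

bool FrameTopFieldFirst(const AVFrame *frame)
{
    return frame->top_field_first != 0;  // field order when interlaced
}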

On some devices (e.g. fire stick g2), the MythTV OpenGL implementation
is not fast enough to display 30 fps, so we are dropping frames. I
believe that the OpenGL processing we use is too much, causing the
slowdown. I believe we need a lightweight OpenGL renderer that renders the
images without all the filters we normally use. The decoding part of it
seems to be fast enough, and audio and video sync nicely; just the video is
jerky because of the dropped frames.

I need to spend some time learning OpenGL so that I can figure this all out.

Any help or advice would be welcome.

Peter
Re: YV12 problem [ In reply to ]
On Wed, Dec 12, 2018 at 12:48:08PM -0500, Peter Bennett wrote:
>
>
> On 12/12/18 1:26 AM, David Engel wrote:
> > I tested on my second generation firetv stick and first generation
> > firetv tonight with and without the reversio patch. I saw no jaggies
> > whatsoever.
> The jaggies occur on the right hand half of the picture. They may not be
> noticeable depending on the content you are watching. This happens in 1080i
> and 720p, but they are more visible in 720p content. The right hand one
> third or one half of the screen has half resolution, due to insufficient
> precision in opengl es when applying the luminance via the shader.
>
> Please try this video and look at the fox sports logo. I am using fire stick
> 2g. This is progressive 720p content, and I use mediacodec decoding.
>
> https://www.dropbox.com/s/e98zdwdgnpt5vav/panning.ts?dl=0

I saw them tonight. Yeah, that's not nice.

I'm still not sure the stick 2g is a good choice for MythTV. Maybe
when the video path is all in the GPU but not right now.

David

> > The firetv 1g was quite usable but moreso with mediacodec decoding.
> > With ffmpeg decoding of mpeg2 video, 720p was fine but 1080i needed
> > bob or one-field deinterlacing to not stutter.
> >
> > The firetv stick 2g wasn't as nice. I couldn't find any combination
> > of setting that didn't stutter. That's in line with what I've heard
> > and read about all firetv stick pre-4k -- they just aren't powerful
> > enought for MythTV.
> >
> Fire stick 2g has jerky movement due to a reduced frame rate (many frames
> are dropped). The audio is smooth without interruption, audio video sync is
> fine and playback keeps up. There are some people who are happy to watch it
> this way, although I would not like the jerky movement. I would rather not
> add jaggies to the mix if possible.
>
> I use the video, advanced, avsync2, standard decoding for mpeg2 with linear
> blend deinterlace, and mediacodec for non-mpeg2.
>
> Peter
>

--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
On Wed, Dec 12, 2018 at 11:49:27PM +0100, Jean-Yves Avenard wrote:
> Hi
>
> On Wed, 12 Dec 2018 at 5:02 pm, David Engel <david@istwok.net> wrote:
>
> > On Wed, Dec 12, 2018 at 08:55:22AM +0100, Jean-Yves Avenard wrote:
> > > From: David Engel <david@istwok.net>
> > > Sent: Wednesday, 12 December 2018 07:26
> > > To: Peter Bennett
> > > Cc: Development of MythTV
> > > Subject: Re: [mythtv] YV12 problem
> > >
> > >
> > > > If somebody can come up with better code that supports these android
> > devices
> > > > without the jaggies, we can then remove that faulty YV12 code.
> > > >
> > >
> > > If you remove yv12 support software decoder won't work any longer.
> > > Now in the TV world that's probably fine, but do we really want not to
> > play those files?
> >
> > Please explain. Without more context and being mostly ignorant of
> > pixel format implilcations (though, I'm trying to correct that), I
> > don't know what to make of this statement
>
>
> TV broadcasts use exclusively mpeg2, h264 and h265.
> Most embedded system like the fire2 have a hardware decoder for those.
>
> So what you get out of the decoder will be a GPU based nv12 image.
>
> For other codecs like say vp8, vp9, av1: you have to use a software decoder
> and they will output yuv420 (if 8 bits).
>
> I was just saying earlier that dropping yuv420, means you'll have to do a
> conversion to nv12 right outside the decoder, so an extra memory allocation
> and unnecessary copy.
>
> All when converting any nv12 shader to do yv12 is trivial; you could even
> use the same code for both.

I'm confused again. I thought we were talking about yv12 but you're
talking mostly about nv12. I know there are a plethora of pixel
formats but it's still mostly Greek alphabet soup to me until I get
further up to speed. If the hardware decoders generate nv12, how does
yv12 fit in? Is it a format that has to be converted to on the way to
output or something else?

> BTW, the default browser by default on the fire2 is Firefox :)
>
> Story goes that Bezos contacted our director personally asking if we could
> port it in 2 weeks. The challenge was accepted.

:) That's very humorous to me. Look at the domain name in my email
address. It's short for "is two weeks okay?" It was the answer we in
engineering were expected to give where I used to (and again now) work
when the owner/boss asked us how long it would take us to do anything.

[From another reply but it fits better here.]

> You touch on a question I was going to ask. That is why does kernel
> deinterlacing need an entirely new shader program for yv12 while
> linear blend doesn't seem to need a new one?

The data between the two is the same, just stored differently. It depends
on what backend is used for storage, but really the difference should be 1
or 2 arithmetic operations in how the pixel data is accessed.

I think that's kind of what I figured. The pixel format is already
known for the textures and handled appropriately, right? The
algorithm of which pixels from which lines/textures to mix with other
pixels remains the same.

David
--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
On Wed, Dec 12, 2018 at 11:40:31PM +0100, Jean-Yves Avenard wrote:
> On Wed, 12 Dec 2018 at 5:04 pm, David Engel <david@istwok.net> wrote:
> > > Next month I'm back in Oz permanently where I have a working mythtv
> > system. I'll be able to get back into coding a bit (plus I need a new
> > hobby).
> > > Happy to give a hand then.
> >
> > Change in job status? Desired or undesired change?
>
> Same job, just that my Aussie wife wanted to go back to Oz.
>
> I work from home, so doesn't change much otherwise.
> Though won't have to stay super late to do meeting with the US anymore, now
> that's nice.

Didn't you have to move back to France for that job initially or had
you already moved before you got it? I thought they both (job and
move) happened about the same time so I assumed they were connected in
some way.

David
--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
On Wed, Dec 12, 2018 at 06:54:32PM -0500, Peter Bennett wrote:
> On 12/12/18 5:49 PM, Jean-Yves Avenard wrote:
> > TV broadcasts use exclusively mpeg2, h264 and h265.
> > Most embedded system like the fire2 have a hardware decoder for those.
> >
> > So what you get out of the decoder will be a GPU based nv12 image.
> >
> > For other codecs like say vp8, vp9, av1: you have to use a software
> > decoder and they will output yuv420 (if 8 bits).
> >
> > I was just saying earlier that dropping yuv420, means you'll have to do
> > a conversion to nv12 right outside the decoder, so an extra memory
> > allocation and unnecessary copy.
> >
> > All when converting any nv12 shader to do yv12 is trivial; you could
> > even use the same code for both.
> >
> Jya - some quick notes on what i have been up to -
>
> I have added MythTV code to decode using mediacodec via FFmpeg, also new
> code to support vaapi with deinterlacing (called vaapi2 in MythTV) and I am
> working on nvdec. However, I need to implement direct output from decoder to
> video. Currently for all of those I have added it is decoding to memory and
> then using the existing MythTV OpenGL to render. This is not fast enough for
> 4K video. I will have to learn how to do the direct output from decode to
> OpenGL.

Have you looked at videoout_openglvaapi.cpp yet? What Mark described
mostly(*) made sense and seems like the way forward. Something very
similar to that seems like the way to go. Configure vaapi to decode
the frames into opengl memory. If hardware deinterlacing is chosen, it
gets done during decoding and we simply display the resulting progressive
frames. If opengl deinterlacing is chosen, don't deinterlace during
decoding and do so in opengl if needed. The only loss is the ability
to use the software deinterlacers which really isn't a loss in my
opinion.

(*)I don't think Mark fully grasped that the deinterlacing could be
done automatically during decoding. Either that or he knows about
some other opengl relationship to vaapi of which I'm unaware.
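
For concreteness, the FFmpeg side of "decode the frames into opengl
memory" starts out roughly like the sketch below (untested, and the OpenGL
interop of the resulting surfaces is exactly the part that still has to be
written):

// Rough sketch: attach a VAAPI hardware device to the decoder so frames
// stay in GPU-side surfaces (AV_PIX_FMT_VAAPI) instead of system memory.
// A get_format callback selecting AV_PIX_FMT_VAAPI is also needed; error
// handling and the OpenGL mapping are omitted.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/buffer.h>
#include <libavutil/hwcontext.h>
}

bool AttachVaapiDevice(AVCodecContext *ctx)
{
    AVBufferRef *hwdev = nullptr;
    // A null device string lets libva pick the default DRM/X11 device.
    if (av_hwdevice_ctx_create(&hwdev, AV_HWDEVICE_TYPE_VAAPI,
                               nullptr, nullptr, 0) < 0)
        return false;
    ctx->hw_device_ctx = av_buffer_ref(hwdev);
    av_buffer_unref(&hwdev);
    return ctx->hw_device_ctx != nullptr;
}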

> One problem with mediacodec decoding is that in most devices it does not do
> deinterlacing and it does not pass MythTV the indicator to say video is
> interlaced. This forces me to use software decoding for mpeg2 so that we can
> detect the interlace and use the OpenGL deinterlacer.

I thought it did give us an indication; we just couldn't know beforehand
until we actually tried it. If it doesn't deinterlace, what do we get
back when we give it 1 interlaced frame? We either get back 1 frame or
2, right? Oh, are you talking about non-double-rate deinterlacing? Do
we know if the frame is interlaced going in? If so, it seems like a job
for YAFS (yet another fine setting) for the user to tell us what to
assume.

David

> On some devices (e.g. fire stick g2), the MythTV OpenGL implementation is
> not fast enough to display 30 fps, so we are dropping frames. I believe that
> the OpenGL processing we use is too much, causing the slowdown. I believe we
> need a lightweight OpenGL render that renders the images without all the
> filters we normally use. The decoding part of it seems to be fast enough,
> audio and video sync nicely, just the video is jerky becuase of the dropped
> frames.
>
> I need to spend some time learning OpenGL so that I can figure this all out.
>
> Any help or advice would be welcome.
>
> Peter
> _______________________________________________
> mythtv-dev mailing list
> mythtv-dev@mythtv.org
> http://lists.mythtv.org/mailman/listinfo/mythtv-dev
> http://wiki.mythtv.org/Mailing_List_etiquette
> MythTV Forums: https://forum.mythtv.org

--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
On Thu, 13 Dec 2018 at 04:28, David Engel <david@istwok.net> wrote:

> I'm confused again. I thought we were talking about yv12 but you're
> talking mostly about nv12. I know there are a plethora of pixel
> formats but it's still mostly Greek alphabet soup to me until I get
> further up to speed. If the hardware decoders generate nv12, how does
> yv12 fit in? Is it a format that has to be converted to on the way to
> output or something else?

the only difference between NV12 and YUV420/YV12 is that U and V are
interleaved for NV12.

So YV12 is stored as:
YYYY
UUUU
VVVV

and NV12 is:
YYYY
UVUVUVUV

that's it.
information is identical just stored differently.

YV12 fits in because the FFmpeg software decoders all output that format.
If you have no hardware decoder you will get YV12 out.
If you use a hardware decoder you get NV12 out (with the buffer
actually in GPU memory).


> I think that's kind of what I figured. The pixel format is already
> known for the textures and handled appropriately, right? The
> algorithm of which pixels from which lines/testures to mix with other
> pixels remains the same.

You do need to tell the GPU what textures you are feeding it.
So with OpenGL, for YV12 you pass 3 texture buffers;
for NV12 you pass 2 texture buffers.
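
Concretely, the upload side looks something like this rough sketch
(GLES2-safe formats, tightly packed rows assumed; not MythTV's actual
texture code):

// Rough sketch of the upload-side difference: YV12 uploads three
// single-channel planes, NV12 uploads one luma plane plus one
// two-channel (interleaved CbCr) plane.
#include <GLES2/gl2.h>

void UploadYV12(GLuint tex[3], const unsigned char *y,
                const unsigned char *u, const unsigned char *v,
                int w, int h)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows assumed tightly packed
    const unsigned char *planes[3] = { y, u, v };
    const int widths[3]  = { w, w / 2, w / 2 };
    const int heights[3] = { h, h / 2, h / 2 };
    for (int i = 0; i < 3; ++i)
    {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, widths[i], heights[i],
                     0, GL_LUMINANCE, GL_UNSIGNED_BYTE, planes[i]);
    }
}

void UploadNV12(GLuint tex[2], const unsigned char *y,
                const unsigned char *uv, int w, int h)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, tex[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, y);
    glBindTexture(GL_TEXTURE_2D, tex[1]); // interleaved CbCr
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, w / 2, h / 2,
                 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, uv);
}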

To show the difference, this is what the shader would be to access the YV12 pixel:
float3 yuv = float3(
tY.Sample(sSampler, aTexCoords).r,
tCb.Sample(sSampler, aTexCoords).r,
tCr.Sample(sSampler, aTexCoords).r);
return CalculateYCbCrColor(yuv * vCoefficient);

That's for nv12
float y = tY.Sample(sSampler, aTexCoords).r;
float2 cbcr = tCb.Sample(sSampler, aTexCoords).rg;
return CalculateYCbCrColor(float3(y, cbcr) * vCoefficient);

That's a D3D11 shader for Firefox OpenGL compositor (everything gets
converted to RGB before being composited)

You can see that for YV12 you access the data for U/V separately, and
for NV12, together. that's it. 1 line change.

If you were to do that in C, it would be (roughly):
For YV12:
Yvalue = Y[x + y * strideY];
Uvalue = U[x/2 + (y/2) * strideU];
Vvalue = V[x/2 + (y/2) * strideV];

With strideU = strideV = strideY/2

For NV12:
Yvalue = Y[x + y * strideY];
Uvalue = UV[(x/2)*2 + (y/2) * strideUV];
Vvalue = UV[(x/2)*2 + 1 + (y/2) * strideUV];

with strideUV = strideY
Re: YV12 problem [ In reply to ]
On Thu, 13 Dec 2018 at 04:31, David Engel <david@istwok.net> wrote:
> Didn't you have to move back to France for that job initially or had
> you already moved before you got it? I thought they both (job and
> move) happened about the same time so I assumed they were connected in
> some way.

I started in Oz, worked 2 years, then I asked if I could work from
France, they said okay, worked 2 years, and now I've asked to work
from Oz again, they said okay but this time you pay for the move.
Re: YV12 problem [ In reply to ]
Hi
On Thu, 13 Dec 2018 at 00:54, Peter Bennett <pb.mythtv@gmail.com> wrote:
> I have added MythTV code to decode using mediacodec via FFmpeg, also new
> code to support vaapi with deinterlacing (called vaapi2 in MythTV) and I
> am working on nvdec. However, I need to implement direct output from
> decoder to video. Currently for all of those I have added it is decoding
> to memory and then using the existing MythTV OpenGL to render. This is
> not fast enough for 4K video. I will have to learn how to do the direct
> output from decode to OpenGL.

So in my experience, doing hardware decoding then readback is slower
(sometimes much slower) than plain software decoding.
The readback is slow. It's not too bad with intel as the memory is
shared, but for nvidia (vdpau/nvdec) or amd (vaapi) it's terrible.

What I don't get, however, is that a vaapi surface is in effect just
like an ogl one and can be used as-is. Why can't you use the OGL
compositor there? That's what the older vaapi decoder was doing. You
get an OGL image out.


>
> One problem with mediacodec decoding is that in most devices it does not
> do deinterlacing and it does not pass MythTV the indicator to say video
> is interlaced. This forces me to use software decoding for mpeg2 so that
> we can detect the interlace and use the OpenGL deinterlacer.

You should be able to determine if the content is interlaced or not
without decoding. I'm not sure about mpeg2, but for h264 and hevc you
certainly can: it's in the frame/stream header (in the SPS NAL for
h264). Other codecs like vp8, vp9 and av1 do not support interlacing;
thank god that prehistoric thing will disappear.
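
For example, FFmpeg exposes that stream-level hint without decoding any
frames - a rough sketch, and how reliable field_order is depends on the
demuxer and codec:

// Rough sketch: ask FFmpeg for the video stream's field order without
// decoding. For h264/hevc this comes from the parameter sets, as above.
extern "C" {
#include <libavformat/avformat.h>
}

bool StreamLooksInterlaced(const char *url)
{
    AVFormatContext *fmt = nullptr;
    if (avformat_open_input(&fmt, url, nullptr, nullptr) < 0)
        return false;
    avformat_find_stream_info(fmt, nullptr);

    bool interlaced = false;
    for (unsigned i = 0; i < fmt->nb_streams; ++i)
    {
        const AVCodecParameters *par = fmt->streams[i]->codecpar;
        if (par->codec_type == AVMEDIA_TYPE_VIDEO &&
            par->field_order != AV_FIELD_UNKNOWN &&
            par->field_order != AV_FIELD_PROGRESSIVE)
            interlaced = true;
    }
    avformat_close_input(&fmt);
    return interlaced;
}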

>
> On some devices (e.g. fire stick g2), the MythTV OpenGL implementation
> is not fast enough to display 30 fps, so we are dropping frames. I
> believe that the OpenGL processing we use is too much, causing the
> slowdown. I believe we need a lightweight OpenGL render that renders the
> images without all the filters we normally use. The decoding part of it
> seems to be fast enough, audio and video sync nicely, just the video is
> jerky becuase of the dropped frames.

If you are doing a readback, this is where your slowness comes from,
almost guaranteed.
We do 1080p 60fps on the firestick 2 just fine with GeckoView. But we
do no readbacks at all. It's GPU hardware all the way with OGL
compositing. On android it's even easier as there's direct support for
NV12 surfaces.

>
> I need to spend some time learning OpenGL so that I can figure this all out.

OGL is on the way out; I assume here you want EGL, which can then
interface with OpenGL ES. That's what you use with the OpenMax
decoder. With Android what you get is a graphics surface directly, with
an opaque shared handle that the android gfx stack can handle directly.
Re: YV12 problem [ In reply to ]
On Thu, 13 Dec 2018 at 04:49, David Engel <david@istwok.net> wrote:

> Have you looked at videoout_openglvaapi.cpp yet? What Mark described
> mostly(*) made sense and seems like the way forward. Something very
> similar to that seems like the way to go. Configure vaapi to decode
> the frames into opengl memory. If hardware deiterlacing is chosen, it
> gets done during decoding and simply display the resulting progressive
> frames. If opengl deinterlacing is chosen, don't deinterlace during
> decoding and do so in opengl if needed. The only loss is the ability
> to use the software deinterlacers which really isn't a loss in my
> opinion.
>
> (*)I don't think Mark fully grasped that the deinterlacing could be
> done automatically during decoding. Either that or he knows about
> some other opengl relationship to vaapi of which I'm unaware.

No, it's just that vaapi provided no deinterlacing interface at the
time Mark wrote the code; it came much later.

vaapi only provided Bob deinterlacing originally, and it wasn't done
during the decoding but on display.

In any case, if you are using vaapi or any other HW decoding
interface, you certainly don't want to use any software deinterlacer;
the cost of readbacks is way too great.
Re: YV12 problem [ In reply to ]
On 13/12/2018 10:36, Jean-Yves Avenard wrote:
>>
>> I need to spend some time learning OpenGL so that I can figure this all out.
>
> OGL is on the way out, I assume here you want EGL which can then
> interface with OpenGL IS. That's what you use with the OpenMax
> decoder. With Android what you get is a graphic surface directly, with
> an opaque shared handle that the android gfx can handle directly.

We should be able to do EGL on all supported platforms.

All the desktop implementations of OGL also support EGL profiles.


Regards
Stuart

Re: YV12 problem [ In reply to ]
On 12/12/18 10:48 PM, David Engel wrote:
> On Wed, Dec 12, 2018 at 06:54:32PM -0500, Peter Bennett wrote:
>> On 12/12/18 5:49 PM, Jean-Yves Avenard wrote:
>>> TV broadcasts use exclusively mpeg2, h264 and h265.
>>> Most embedded system like the fire2 have a hardware decoder for those.
>>>
>>> So what you get out of the decoder will be a GPU based nv12 image.
>>>
>>> For other codecs like say vp8, vp9, av1: you have to use a software
>>> decoder and they will output yuv420 (if 8 bits).
>>>
>>> I was just saying earlier that dropping yuv420, means you'll have to do
>>> a conversion to nv12 right outside the decoder, so an extra memory
>>> allocation and unnecessary copy.
>>>
>>> All when converting any nv12 shader to do yv12 is trivial; you could
>>> even use the same code for both.
>>>
>> Jya - some quick notes on what i have been up to -
>>
>> I have added MythTV code to decode using mediacodec via FFmpeg, also new
>> code to support vaapi with deinterlacing (called vaapi2 in MythTV) and I am
>> working on nvdec. However, I need to implement direct output from decoder to
>> video. Currently for all of those I have added it is decoding to memory and
>> then using the existing MythTV OpenGL to render. This is not fast enough for
>> 4K video. I will have to learn how to do the direct output from decode to
>> OpenGL.
> Have you looked at videoout_openglvaapi.cpp yet? What Mark described
> mostly(*) made sense and seems like the way forward.
I looked at it briefly before I started the vaapi2 stuff. It is not easy
to understand, and I figured it might be easier to start from scratch.
However, to get the direct rendering I need to dig into it, or look at
EGL, which is what JYA recommends.

> Something very
> similar to that seems like the way to go. Configure vaapi to decode
> the frames into opengl memory. If hardware deiterlacing is chosen, it
> gets done during decoding and simply display the resulting progressive
> frames. If opengl deinterlacing is chosen, don't deinterlace during
> decoding and do so in opengl if needed. The only loss is the ability
> to use the software deinterlacers which really isn't a loss in my
> opinion.
>
> (*)I don't think Mark fully grasped that the deinterlacing could be
> done automatically during decoding. Either that or he knows about
> some other opengl relationship to vaapi of which I'm unaware.
>
>> One problem with mediacodec decoding is that in most devices it does not do
>> deinterlacing and it does not pass MythTV the indicator to say video is
>> interlaced. This forces me to use software decoding for mpeg2 so that we can
>> detect the interlace and use the OpenGL deinterlacer.
> I thought it did give us an indication we just couldn't know before
> hand until we actually tried it. It it doesn't deinterlace what do we
> get back when we give it 1 interlaced frame? We either get back 1
> frame or 2, right?
We get back an interlaced frame, but the ffmpeg indicator that tells if
it is interlaced or not is "false", meaning it is not interlaced. That is
what normally turns on the deinterlacer in MythTV. I am not sure about
the bit format of the frame; from what I understand, interlaced frames
have two fields, arranged one after the other, like two half pictures. I
would have expected the output to be completely corrupted if the renderer
assumed it was one progressive frame when it was actually two interlaced
fields. There is something going on that I don't understand. Further
investigation is needed. Perhaps we can tell if it is interlaced.
> Oh, are you talking about non-double rate
> deinterlacing? Do we know if the frame is interlaced going in? If
> so, seems like a job for YAFS (yet another fine setting) for the user
> to tell us what to assume.
>
> David
>
>> On some devices (e.g. fire stick g2), the MythTV OpenGL implementation is
>> not fast enough to display 30 fps, so we are dropping frames. I believe that
>> the OpenGL processing we use is too much, causing the slowdown. I believe we
>> need a lightweight OpenGL render that renders the images without all the
>> filters we normally use. The decoding part of it seems to be fast enough,
>> audio and video sync nicely, just the video is jerky becuase of the dropped
>> frames.
>>
>> I need to spend some time learning OpenGL so that I can figure this all out.
>>
>> Any help or advice would be welcome.
>>
>> Peter
>> _______________________________________________
>> mythtv-dev mailing list
>> mythtv-dev@mythtv.org
>> http://lists.mythtv.org/mailman/listinfo/mythtv-dev
>> http://wiki.mythtv.org/Mailing_List_etiquette
>> MythTV Forums: https://forum.mythtv.org

Re: YV12 problem [ In reply to ]
> We get back an interlaced frame, but the ffmpeg indicator that tells if
> it is interlaced or not is "false" meaning it is not interlaced. That is
> what normally turns on the deinterlacer in MythTV. I am not sure about
> the bit format of the frame, from what I understand, interlaced frames
> have two fields, arranged one after the other, like two half pictures. I
> would have expected the output to be completely corrupted if the render
> assumed it was one progressive frame when it was actually two interlaced
> fields. There is something going on that I don't understand. Further
> investigation is needed. Perhaps we can tell if it is interlaced.

Interlaced AVFrames from ffmpeg are actually interleaved. They are
not represented as one entire field on top of the other (i.e. two half
pictures). Hence if you eyeball an interlaced frame it will basically
look right structurally speaking, although every other line of the
frame is from a different point in time (i.e. lines 0,2,4 are from time
X and lines 1,3,5 are from time X+1). You need to use the AVFrame
top_field_first indicator to know which set of lines is to be played
out first, temporally speaking. Not taking into account the
top_field_first flag can result in playing out the fields in the order
"2,1,4,3" instead of "1,2,3,4", which causes rather obvious visual
stuttering.
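
To make that concrete, pulling one field out of an interleaved plane is
just selecting the even or odd lines. A rough sketch over a single 8-bit
plane (not MythTV code):

// Rough sketch: copy one field out of an interleaved plane. Even lines
// form the top field, odd lines the bottom field; top_field_first says
// which of the two is earlier in time.
#include <cstdint>
#include <cstring>

void ExtractField(const uint8_t *plane, int stride, int width, int height,
                  bool topField, uint8_t *field /* height/2 rows of width */)
{
    for (int row = topField ? 0 : 1, out = 0; row < height; row += 2, ++out)
        memcpy(field + out * width, plane + row * stride, width);
}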

Devin

--
Devin J. Heitmueller
http://www.devinheitmueller.com
Re: YV12 problem [ In reply to ]
On Thu, Dec 13, 2018 at 11:22:59AM +0100, Jean-Yves Avenard wrote:
> On Thu, 13 Dec 2018 at 04:28, David Engel <david@istwok.net> wrote:
>
> > I'm confused again. I thought we were talking about yv12 but you're
> > talking mostly about nv12. I know there are a plethora of pixel
> > formats but it's still mostly Greek alphabet soup to me until I get
> > further up to speed. If the hardware decoders generate nv12, how does
> > yv12 fit in? Is it a format that has to be converted to on the way to
> > output or something else?
>
> the only difference between NV12 and YUV420/YV12 is that U and V are
> interleaved for NV12.
>
> So YV12 is stored as:
> YYYY
> UUUU
> VVVV
>
> and NV12 is:
> YYYY
> UVUVUVUV
>
> that's it.
> information is identical just stored differently.
>
> YV12 fits in because all FFmpeg decoder outputs that format.
> If you have no hardware decoder you will get YV12 out
> If you use a hardware decoder you get NV12 out (with the buffer
> actually in GPU memory)

Thanks. That is very helpful.

> > I think that's kind of what I figured. The pixel format is already
> > known for the textures and handled appropriately, right? The
> > algorithm of which pixels from which lines/testures to mix with other
> > pixels remains the same.
>
> You do need to tell the GPU what textures you are you are feeding it.
> So with OpenGL, for YV12 you pass 3 textures buffer
> NV12 you pass 2 textures buffer.
>
> To show the difference on what the shader would be to access the YV12 pixel:
> float3 yuv = float3(
> tY.Sample(sSampler, aTexCoords).r,
> tCb.Sample(sSampler, aTexCoords).r,
> tCr.Sample(sSampler, aTexCoords).r);
> return CalculateYCbCrColor(yuv * vCoefficient);
>
> That's for nv12
> float y = tY.Sample(sSampler, aTexCoords).r;
> float2 cbcr = tCb.Sample(sSampler, aTexCoords).rg;
> return CalculateYCbCrColor(float3(y, cbcr) * vCoefficient);
>
> That's a D3D11 shader for Firefox OpenGL compositor (everything gets
> converted to RGB before being composited)
>
> You can see that for YV12 you access the data for U/V separately, and
> for NV12, together. that's it. 1 line change.
>
> If you were to do that in C, it would be:
> YV12
> Yvalue = Y[x] + y * strideY;
> Uvalue = U[x/2] + y * strideU;
> Vvalue = V[x/2] + y * strideV;
>
> With strideU=strideV=strideY/2
>
> For NV12:
> Yvalue = Y[x] + y * strideY;
> Uvalue = UV[int(x/2)*2] + y * strideUV;
> Vvalue = VV[int(x/2)*2+1] + y * strideUV;
>
> with strideUV = strideY

Helpful again. The C is quite understandable. The shader code is
still quite foreign to me, though.

This will hopefully help me review the yv12 changes in question.

David
--
David Engel
david@istwok.net
Re: YV12 problem [ In reply to ]
> Message written by Jean-Yves Avenard <jyavenard@gmail.com> on 12.12.2018 at 08:51:
>
> Next month I'm back in Oz permanently where I have a working mythtv system. I'll be able to get back into coding a bit (plus I need a new hobby).

Jean-Yves,

new hobby?

What about making myth play HEVC/VP9 4k@60p smoothly on a $20-30 SoC?

We already have a netbooted OS with zero-touch provisioning(*) to make such an appliance(**).
The only thing missing is HW-assisted video decode(***) on non-x86.

I'm really enjoying the potential competition with LibreELEC :-)

What we can offer (compared to LibreELEC) is:
-network boot (so zero install)
-zero touch provisioning (so zero initial config)
-OOB functionality (nothing to install):
1.LiveTV
2.TV Recordings
3.DDiscPlay
4.Video
5.Music
6.Radio
7.Internet browsing
8.Phone calls
9.Surveillance

IMHO such a project can run circles around Kodi-based things…

What do you think?



(*) x86 needs only PXE enabled in the BIOS. RPI needs nothing to boot.
So the user plugs in Eth and PWR and has a working FE.

(**) MiniMyth2 already does x86 Intel/Nvidia/AMD, armv7 (tested on RPI2) and aarch64 (working RPI3 & S905; Rockchip and Allwinner in the future)

(***) S905 & Allwinner already have this (via V4L2 m2m ffmpeg). It is myth that is missing support to use this.
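
(For the record, picking up such a decoder on the FFmpeg side is roughly
the sketch below; wiring it into myth is exactly the missing piece.)

// Rough sketch: prefer the V4L2 mem-to-mem hardware decoder when the
// FFmpeg build provides one, otherwise fall back to software decoding.
// The surrounding plumbing (buffer handling, display path) is not shown.
extern "C" {
#include <libavcodec/avcodec.h>
}

const AVCodec *FindH264Decoder(void)
{
    const AVCodec *hw = avcodec_find_decoder_by_name("h264_v4l2m2m");
    if (hw)
        return hw;                                 // e.g. S905/Allwinner
    return avcodec_find_decoder(AV_CODEC_ID_H264); // software fallback
}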
