Mailing List Archive

Long delay when starting playback - possible fix - update
A few months ago, I reported a problem where starting to play a recording
would result in a long pause, around 30 seconds, with the screen completely
frozen, before playback would start. I found a workaround:

Every night a script is run, which essentially does this:
mysqlcheck -c mythconverg -u root
If I run the script manually, just before starting the frontend,
then I don't get the delay on starting to watch a recording.

I wondered whether it was a database corruption problem, although the
check never reports any errors, just prints an 'OK' for each table checked.
Then, watching it run one day, I noticed that it flashed through most of
the tables, but took a long time on one of them:
mythconverg.recordedseek
I set up a cron job to check just that table, at a time just before we
start watching TV, and it makes the problem go away.
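The crontab entry for that can be as simple as something like this (the
time shown is just an example; adjust the user/credentials to match
however your nightly script connects to MySQL):

# m  h   dom mon dow   command
45  17   *   *   *     mysqlcheck -c -u root mythconverg recordedseek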

My guess is that the table contains the data needed for seeking back and
forth in each of the recording files, and is therefore very large. The
checking process perhaps leaves the table's records in the
recently-accessed cache, so that the backend (I guess it is the backend)
doesn't have to wait for the table to be loaded from disk.
Does that seem reasonable? I take it nobody else has this problem.
Is it maybe a DBMS configuration issue?

_______________________________________________
mythtvnz mailing list
mythtvnz@lists.ourshack.com
https://lists.ourshack.com/mailman/listinfo/mythtvnz
Archives http://www.gossamer-threads.com/lists/mythtv/mythtvnz/
Re: Long delay when starting playback - possible fix - update
On Fri, 3 Nov 2017 11:47:22 +1300, you wrote:

>A few months ago, I reported a problem where starting to play a recording
>would result in a long pause, around 30 seconds, with the screen completely
>frozen, before the playback would start. I found a work-round:
>
>Every night a script is run, which essentially does this:
> mysqlcheck -c mythconverg -u root
>If I run the script manually, just before starting the frontend,
>then I don't get the delay on starting to watch a recording.
>
>I wondered whether it was a database corruption problem, although it never
>reported finding any errors, just prints an 'OK' for each table checked.
>Then, watching it run one day, I noticed that it flashed through most of
>the tables, but took a long time on one of them:
> mythconverg.recordedseek
>I set up a cron job to check just that table, at a time just before we
>start watching TV, and it makes the problem go away.
>
>My guess is that the table contains data needed for seeking back and forth
>in each of the recordings files, and is therefore very long. The checking
>process perhaps leaves the table records in the recently-accessed cache,
>so that the (backend, I guess) doesn't have to wait for the table to be loaded.
>Does that seem reasonable? I take it nobody else has this problem.
>Is it maybe a DBMS configuration issue?

The recordedseek table does indeed contain the data to allow fast
seeking in a recording, and also commercial skip information and
bookmarks. It is normally far and away the largest table in the
database. It is also the table most likely to be damaged if anything
happens while a recording is in progress (e.g. a power cut), because
it is being written to continuously whenever a recording is being
made.
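If you want to see how big the table is on your own system, one way is
to ask information_schema (with sudo if necessary; this uses the same
Debian maintenance credentials as the scripts further down):

mysql --defaults-extra-file=/etc/mysql/debian.cnf -e \
  "SELECT table_name,
          ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
   FROM information_schema.tables
   WHERE table_schema = 'mythconverg'
   ORDER BY data_length + index_length DESC
   LIMIT 5;"

On most systems recordedseek will be at the top of that list by a
large margin.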

In the old MythTV Control Centre (no longer maintained), there is an
option to set up automatic database checks and repairs. It has been
recommended for as long as I can remember that everyone have that set
up. Without it, if a database corruption occurs, then it will not be
repaired unless the user notices and repairs it manually. The way
database corruption works is that a simple corruption can normally
be repaired easily by mysqlcheck. However, if a corrupt table is
corrupted further (by being written to, or by another event causing
more corruption), then the table can become unrecoverable, and the
only way to get it back is to restore from a backup or recreate it
from scratch. So without a daily check on the database, you can end
up with corruption bad enough that the database becomes unusable.
If you have daily checks being done, that is very unlikely to
happen. In all the cases I have heard of where someone has lost a
table or a database to corruption, they did not have the automatic
daily checking set up.

It is also recommended to have at least weekly automatic database
backups. I personally recommend daily backups.

To enable daily database checks (in Mythbuntu), do this (with sudo if
necessary):

cp -a /usr/share/doc/mythtv-backend/contrib/maintenance/optimize_mythdb.pl \
    /etc/cron.daily/optimize_mythdb
chmod u=rwx,g=rx,o=rx /etc/cron.daily/optimize_mythdb
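It is worth running it once by hand to confirm it works, and checking
that run-parts will pick it up (by default run-parts skips file names
containing a dot, which is why the .pl extension is dropped in the
copy above). Again with sudo if necessary:

run-parts --test /etc/cron.daily | grep optimize_mythdb
/etc/cron.daily/optimize_mythdb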

At the end of the optimize_mythdb file, after the code that checks and
repairs the database tables, you will see this:

# Defragment seek table
if ($dbh->do("ALTER TABLE `recordedseek` ORDER BY chanid, starttime, type")) {
    print "Defragmented: recordedseek\n";
}
# Defragment program table
if ($dbh->do("ALTER TABLE `program` ORDER BY starttime, chanid")) {
    print "Defragmented: program\n";
}
# Defragment video seek table
if ($dbh->do("ALTER TABLE `filemarkup` ORDER BY filename")) {
    print "Defragmented: filemarkup\n";
}

What that does is make MySQL copy the entire table to a new table with
the rows in the specified order, then delete the old table and use the
new one. That defragments the table's storage on disk, and ordering
the rows also means that the rows needed for playing back any one
recording sit next to each other on disk. I think that by doing that,
your seek table problems should go away.
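If you want to try the effect of the defragment on its own before
setting up the cron job, the same statement can be run by hand (with
sudo if necessary, using the same Debian maintenance credentials as
the scripts here; recordedseek is normally MyISAM, which is the engine
that ALTER TABLE ... ORDER BY physically reorders):

mysql --defaults-extra-file=/etc/mysql/debian.cnf mythconverg -e \
  "ALTER TABLE recordedseek ORDER BY chanid, starttime, type;"

Expect it to take a while, and see the warning below about free space.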

WARNING: As the defragmenting needs to make a copy of all of the files
for the table being defragmented, you will need to make sure that the
partition containing the mythconverg database has enough free space
for that to happen. The recordedseek table gets massively big if you
have lots of recordings, so you need to regularly check that you have
enough spare space for the copies. As an example, here are my
recordedseek files:

root@mypvr:/var/lib/mysql/mythconverg# ll -h recordedseek.*
-rw-rw---- 1 mysql mysql 1.1K Aug 6 07:44 recordedseek.frm
-rw-rw---- 1 mysql mysql 5.3G Nov 3 13:18 recordedseek.MYD
-rw-rw---- 1 mysql mysql 4.9G Nov 3 13:18 recordedseek.MYI

So I need 5.3 + 4.9 = 10.2 GiB of free space on my system SSD
partition to do the defragmentation. And if the table gets corrupted
and is automatically repaired by the daily check, it will also need
the same amount of free space at that time.
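A quick way to compare the current table size against the free space
on that partition (adjust /var/lib/mysql if your database files live
elsewhere):

du -ch /var/lib/mysql/mythconverg/recordedseek.*
df -h /var/lib/mysql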

Due to how SSDs work, defragmenting may not be necessary if you have
your database on an SSD. That is because SSDs are truly random
access: reading any location on the drive takes essentially the same
time. Hard drives have to move their heads from one cylinder to
another, and that takes time in proportion to how far the heads have
to move. So on spinning rust, keeping all of a table's data together
in the same area of the disk is important for fast sequential access.

To enable weekly database backups, create a file
/etc/cron.weekly/mythtv-database containing this:

#!/bin/sh
# /etc/cron.weekly/mythtv-database script - check and backup mythconverg tables
# Copyright 2005/12/02 2006/10/08 Paul Andreassen
#           2010 Mario Limonciello

set -e -u

DBNAME="mythconverg"
DEBIAN="--defaults-extra-file=/etc/mysql/debian.cnf"

# Debug:
/usr/bin/logger -p daemon.info -i -t${0##*/} "Debug: $DBNAME cron.weekly checking started."

/usr/bin/mysqlcheck $DEBIAN -s $DBNAME

# Debug:
/usr/bin/logger -p daemon.info -i -t${0##*/} "Debug: $DBNAME cron.weekly checking finished, backup starting."

/usr/share/mythtv/mythconverg_backup.pl

/usr/bin/logger -p daemon.info -i -t${0##*/} "$DBNAME checked and backed up."

# End of file.

and do this (with sudo if necessary):

chmod u=rwx,g=rx,o=rx /etc/cron.weekly/mythtv-database

To enable daily backups, put that file in /etc/cron.daily instead.
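Whichever schedule you use, it pays to check now and then that the
backups are actually appearing and are readable. The backup files are
typically named something like mythconverg-<schema version>-<date>.sql.gz,
and land in whatever directory mythconverg_backup.pl is configured to
use (the path below is just an example):

ls -lh /var/lib/mythtv/db_backups/mythconverg-*.sql.gz | tail -n 3
gunzip -t /var/lib/mythtv/db_backups/mythconverg-*.sql.gz && echo "backups look OK"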

Personally, I have modified those files to do daily backups to a
network drive on another PC, and also weekly backups locally on the
same PC (but to a different drive from the one the database is on).
The backup process used by mythconverg_backup.pl works by first
dumping the database to a text .sql file, then compressing that file
with gzip. The compression is done by reading back the .sql file and
writing a new compressed .sql.gz file in the same directory. Since
the reading and writing is taking place on the same drive, and the
file is large, there is considerable head movement between the two
files, slowing down the process. So I wrote myself a modified version
of mythconverg_backup.pl that adds an extra parameter to tell it where
there is temporary storage for the .sql file. If that temporary
storage is on another drive (especially an SSD), then the backup
process is sped up considerably. I have put my version of
mythconverg_backup.pl on my web server for anyone who would like a
copy:

http://www.jsw.gen.nz/mythtv/mythconverg_backup_jsw.pl

Using that, my daily backup looks like this:

root@mypvr:/etc/cron.daily# cat mythtv-database
#!/bin/sh
# /etc/cron.weekly/mythtv-database script - check and backup mythconverg tables
# Copyright 2005/12/02 2006/10/08 Paul Andreassen
#           2010 Mario Limonciello

# JSW Modified for /etc/cron.daily backups to a network drive.

set -e -u

DBNAME="mythconverg"
DEBIAN="--defaults-extra-file=/etc/mysql/debian.cnf"

# Debug:
/usr/bin/logger -p daemon.info -i -t${0##*/} "Debug: $DBNAME cron.daily checking started."

/usr/bin/mysqlcheck $DEBIAN -s $DBNAME

# Debug:
/usr/bin/logger -p daemon.info -i -t${0##*/} "Debug: $DBNAME cron.daily checking finished, backup starting."

/usr/local/bin/mythconverg_backup_jsw.pl \
    --directory "/mnt/savaidh/ldrive/Backups/MythTV_db_backup/mypvr" \
    --tempdir "/mnt/ssd1/tmp"

/usr/bin/logger -p daemon.info -i -t${0##*/} "$DBNAME checked and backed up."

# End of file.

Note: Once your database gets large, the checking and especially the
backup operations can generate so much database activity that they cause
problems for any recording happening at the same time. The usual
problem I have seen is that mythcommflag hangs trying to write to
recordedseek, and will have to be manually killed and restarted. But
mythbackend can also have problems writing to recordedseek and other
tables. Fortunately, the time when cron runs daily and weekly jobs is
early in the morning, typically in the 06:00-08:00 region, when
recordings are rare. But bear in mind that recordings happening at
that time may not work properly, or may need to have mythcommflag
--rebuild run on them.
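Rebuilding the seek data for a single recording looks like this (the
chanid and starttime values here are just placeholders; use the ones
for the affected recording, which you can get from the recorded table
or the frontend's recording details):

mythcommflag --rebuild --chanid 1234 --starttime 20171103063000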
