Mailing List Archive

Database damage?
Hi:

I’m trying to resurrect my Myth system (0.28-fixes with MariaDB 5.5 on OS X). Some months ago, I noticed that all of my recordings were failing. I then found that the mythbackend log had hundreds of pairs of entries like the following:

2021-02-08 13:44:12.761647 E [55541/775] CoreContext programinfo.cpp:2616 (GetPlaybackURL) - ProgramInfo(1411_20190311025900.ts): GetPlaybackURL: '1411_20190311025900.ts' should be local, but it can not be found.
2021-02-08 13:44:12.761688 E [55541/775] CoreContext mainserver.cpp:3061 (DoHandleDeleteRecording) - MainServer: ERROR when trying to delete file: GetPlaybackURL/UNABLE/TO/FIND/LOCAL/FILE/ON/MediumMini.local/1411_20190311025900.ts. File doesn't exist. Database metadata will not be removed.

In mythfrontend, Watch Recordings/All Programs says I have 1,132 recordings consuming 4,028.30 GB. All my recordings are stored on an external drive and I have triple checked the path to the recordings directory in mythtv-setup. There are 2,090 files—recordings and previews--in the relevant directory consuming 3.96 TB. Note that I can successfully play various old recordings in mythfrontend. I don’t have MythWeb running. I think my recordings started failing when the number of bad entries got so large that it took too long trying to go through them.

I tried to run find_orphans.py. It found 673 recordings with missing files. If I select "Delete orphaned recording entries", I get the message:

Warning: Failed to delete 'MediumMini.local: Republic of Doyle - The Driver'

Then the list of orphaned recordings, etc, is re-displayed—nothing has apparently been fixed.

Checking the MariaDB error log, quite a few tables are marked as crashed, e.g. settings, recorded, oldrecorded, etc. I have a daily backup of the database (using mythconverg_backup.pl) and I run optimize_mythdb.pl on a daily basis. Even though optimization completes successfully, MariaDB complains the next time I start the database.

Any suggestions? I’m thinking to nuke the database and do a partial restore and then see if find_orphans.py will give me a true list of my recordings. Or is there an easy database tool that might fix whatever is wrong in situ?

If I can get the recordings back, I’m going to look into migrating to 31.

Craig

_______________________________________________
mythtv-users mailing list
mythtv-users@mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-users
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org
Re: Database damage? [ In reply to ]
On Wed, 10 Feb 2021 11:44:30 -0500, you wrote:

>Hi:
>
>I'm trying to resurrect my Myth system (0.28-fixes with MariaDB 5.5 on OS X). Some months ago, I noticed that all of my recordings were failing. I then found that the mythbackend log had hundreds of pairs of entries like the following:
>
>2021-02-08 13:44:12.761647 E [55541/775] CoreContext programinfo.cpp:2616 (GetPlaybackURL) - ProgramInfo(1411_20190311025900.ts): GetPlaybackURL: '1411_20190311025900.ts' should be local, but it can not be found.
>2021-02-08 13:44:12.761688 E [55541/775] CoreContext mainserver.cpp:3061 (DoHandleDeleteRecording) - MainServer: ERROR when trying to delete file: GetPlaybackURL/UNABLE/TO/FIND/LOCAL/FILE/ON/MediumMini.local/1411_20190311025900.ts. File doesn't exist. Database metadata will not be removed.
>
>In mythfrontend, Watch Recordings/All Programs says I have 1,132 recordings consuming 4,028.30 GB. All my recordings are stored on an external drive and I have triple checked the path to the recordings directory in mythtv-setup. There are 2,090 files--recordings and previews--in the relevant directory consuming 3.96 TB. Note that I can successfully play various old recordings in mythfrontend. I don't have MythWeb running. I think my recordings started failing when the number of bad entries got so large that it took too long trying to go through them.
>
>I tried to run find_orphans.py. It found 673 recordings with missing files. If I select "Delete orphaned recording entries", I get the message:
>
>Warning: Failed to delete 'MediumMini.local: Republic of Doyle - The Driver'
>
>Then the list of orphaned recordings, etc, is re-displayed--nothing has apparently been fixed.

It takes time for mythbackend to delete things, so when you are
deleting a long list of missing file recordings, only one or two will
have actually been deleted by the time that find_orphans.py creates
its new list of missing file recordings. So you need to re-run
find_orphans.py and get it to re-create the list, until you see that
the deletions have been completed. When you delete recordings, you
are actually just telling mythbackend to add the delete requests to a
delete queue, which it processes in the background until it is empty.
The delete queue is saved over a shutdown of mythbackend and will be
restarted on the startup of mythbackend.

>Checking the MariaDB error log, quite a few tables are marked as crashed, e.g. settings, recorded, oldrecorded, etc. I have a daily backup of the database (using mythconverg_backup.pl) and I run optimize_mythdb.pl on a daily basis. Even though optimization completes successfully, MariaDB complains the next time I start the database.

I have been in the situation you find yourself in and I did manage to
repair the database without having to restore from backup. So I have
some suggestions.

If you are still getting crashed tables after optimize_mythdb.pl has
been run, then it has not successfully completed optimisations. There
will likely be various log messages about this in the MariaDB logs. A
common cause of failed table repairs is running out of space on the
partition containing the database - that is what happened to me. The
process of repairing a table requires copying the table and all its
indexes to new files, deleting the old table and index files, and then
renaming the repaired files back to the original names. Tables are
repaired one at a time. The largest table is normally the
recordedseek table, so you need to go to the location of the database
tables and find out how much space recordedseek is using. This is
what I get:

root@mypvr:/var/lib/mysql/mythconverg# du -cb recordedseek*
1038 recordedseek.frm
8741298214 recordedseek.MYD
7864638464 recordedseek.MYI
16605937716 total
root@mypvr:/var/lib/mysql/mythconverg# du -ch recordedseek*
4.0K recordedseek.frm
8.2G recordedseek.MYD
7.4G recordedseek.MYI
16G total
root@mypvr:/var/lib/mysql/mythconverg# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p4 100G 74G 22G 78% /

So the total space used by my recordedseek table and its index is 16
Gbytes, and I have 22 Gbytes free, so the recordedseek table can be
repaired.
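
For anyone who wants to script that headroom check, here is a minimal sketch (the helper name is mine, and the datadir path is an example; adjust it for your install). It sums the table's files the same way the du output above does, then compares against free space:

```python
import glob
import os
import shutil

def repair_headroom(datadir, table):
    """Return (table_bytes, free_bytes) for a MyISAM table in datadir.

    A repair needs roughly the table's data + index size free, since
    MariaDB rebuilds the files alongside the originals.
    """
    # Sum the .frm/.MYD/.MYI files belonging to the table.
    table_bytes = sum(os.path.getsize(f)
                      for f in glob.glob(os.path.join(datadir, table + '.*')))
    free_bytes = shutil.disk_usage(datadir).free
    return table_bytes, free_bytes

# Example (path from the session above):
#   used, free = repair_headroom('/var/lib/mysql/mythconverg', 'recordedseek')
#   print('table uses %.1f GiB, %.1f GiB free' % (used / 2**30, free / 2**30))
#   if free < used:
#       print('WARNING: probably not enough space to repair this table')
```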

You should also look for old files left behind by failed repair
attempts. These may need to be deleted manually. Failed copies of
recordedseek can use up a lot of space. If you find recordedseek.*
files other than the ones above, they are likely the result of failed
repairs and need deleting. Once failed table copies have used up all
the free space, every subsequent table repair will also fail, creating
more problems.
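
A sketch of an automated sweep for such leftovers (the helper name and extension list are mine, and assume plain MyISAM tables; MyISAM repair temporaries usually end in .TMD, and interrupted ALTERs leave '#sql-...' files; review anything it flags before deleting, with MariaDB stopped):

```python
import os

# Files a healthy MyISAM database directory normally contains
# (.opt covers db.opt; anything else deserves a look).
EXPECTED_EXTS = {'.frm', '.MYD', '.MYI', '.opt'}

def find_suspect_files(datadir):
    """List files that look like debris from failed repairs or ALTERs."""
    suspects = []
    for name in os.listdir(datadir):
        _, ext = os.path.splitext(name)
        # Repair temporaries (.TMD) and interrupted ALTER TABLE
        # leftovers ('#sql-...') are the usual culprits.
        if name.startswith('#sql-') or ext not in EXPECTED_EXTS:
            suspects.append(name)
    return sorted(suspects)

# Example: print(find_suspect_files('/var/lib/mysql/mythconverg'))
```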

I now have a script that is run hourly by systemd to check that there
is sufficient free space on my system partition. It emails me if
there is a problem.

If the optimize_mythdb.pl script does not repair everything, you can
manually run mysqlanalyze/mysqlcheck/mysqlrepair which have more
options that can help.

Before attempting any drastic repairs, it is best to shut down MariaDB
and make a complete copy of the database files to another drive. That
way if you make a mistake you can just shut down MariaDB and copy the
backup files back to their original location and then try again.
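
As a sketch of that backup step (the helper name and paths are examples; MariaDB must be fully stopped first, or the copied files will be inconsistent):

```python
import shutil
import time

def snapshot_db(datadir, backup_root):
    """Copy the raw table files to another drive, with MariaDB stopped."""
    dest = '%s/mythconverg-%s' % (backup_root, time.strftime('%Y%m%d-%H%M%S'))
    # copytree refuses to overwrite, so each snapshot gets a fresh
    # timestamped directory. Restoring is just copying the files back
    # and fixing ownership (e.g. chown -R mysql:mysql).
    shutil.copytree(datadir, dest)
    return dest

# Example: snapshot_db('/var/lib/mysql/mythconverg', '/mnt/backup')
```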

>Any suggestions? I'm thinking to nuke the database and do a partial restore and then see if find_orphans.py will give me a true list of my recordings. Or is there an easy database tool that might fix whatever is wrong in situ?
>
>If I can get the recordings back, I'm going to look into migrating to 31.
>
>Craig
Re: Database damage? [ In reply to ]
> On Feb 10, 2021, at 12:56 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>
Upfront, I really appreciate your detailed response. Unfortunately I’m not quite there yet.

> On Wed, 10 Feb 2021 11:44:30 -0500, you wrote:
>
>> ...
>> I tried to run find_orphans.py. It found 673 recordings with missing files. If I select "Delete orphaned recording entries", I get the message:
>>
>> Warning: Failed to delete 'MediumMini.local: Republic of Doyle - The Driver'
>>
>> Then the list of orphaned recordings, etc, is re-displayed--nothing has apparently been fixed.
>
> It takes time for mythbackend to delete things, so when you are
> deleting a long list of missing file recordings, only one or two will
> have actually been deleted by the time that find_orphans.py creates
> its new list of missing file recordings. So you need to re-run
> find_orphans.py and get it to re-create the list, until you see that
> the deletions have been completed. When you delete recordings, you
> are actually just telling mythbackend to add the delete requests to a
> delete queue, which it processes in the background until it is empty.
> The delete queue is saved over a shutdown of mythbackend and will be
> restarted on the startup of mythbackend.
>
I should have mentioned that in multiple attempts to run find_orphans.py, it _always_ comes back with the same message about "Republic of Doyle - The Driver". In mythfrontend, there is no such recording visible. Months ago, when my recordings started to fail, I believe I got this message and then tried to delete this particular recording in the frontend. I don't recall any obvious problems at that time, but maybe this is what is gumming up the works for me.

Should I try zapping something in the database to remove any remnants of The Driver?

>> Checking the MariaDB error log, quite a few tables are marked as crashed, e.g. settings, recorded, oldrecorded, etc. I have a daily backup of the database (using mythconverg_backup.pl) and I run optimize_mythdb.pl on a daily basis. Even though optimization completes successfully, MariaDB complains the next time I start the database.
>
> I have been in the situation you find yourself in and I did manage to
> repair the database without having to restore from backup. So I have
> some suggestions.
> [good advice on free space and mysqlcheck elided]

I have verified that the partition has 30+ gigs of free space.

Thanks for reminding me about mysqlcheck. I’ve now run the regular and extended checks and all tables are reporting OK. I’ll have to look further at the error log—maybe I was looking at old error messages?

Thanks again, Stephen, for taking the time to give me some helpful advice.

Craig


Re: Database damage? [ In reply to ]
On Wed, 10 Feb 2021 19:37:39 -0500, you wrote:

>> On Feb 10, 2021, at 12:56 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>>
>Upfront, I really appreciate your detailed response. Unfortunately I'm not quite there yet.
>
>> On Wed, 10 Feb 2021 11:44:30 -0500, you wrote:
>>
>>> ...
>>> I tried to run find_orphans.py. It found 673 recordings with missing files. If I select "Delete orphaned recording entries", I get the message:
>>>
>>> Warning: Failed to delete 'MediumMini.local: Republic of Doyle - The Driver'
>>>
>>> Then the list of orphaned recordings, etc, is re-displayed--nothing has apparently been fixed.
>>
>> It takes time for mythbackend to delete things, so when you are
>> deleting a long list of missing file recordings, only one or two will
>> have actually been deleted by the time that find_orphans.py creates
>> its new list of missing file recordings. So you need to re-run
>> find_orphans.py and get it to re-create the list, until you see that
>> the deletions have been completed. When you delete recordings, you
>> are actually just telling mythbackend to add the delete requests to a
>> delete queue, which it processes in the background until it is empty.
>> The delete queue is saved over a shutdown of mythbackend and will be
>> restarted on the startup of mythbackend.
>>
>I should have mentioned that in multiple attempts to run find_orphans.py, it _always_ comes back with the same message about "Republic of Doyle - The Driver". In mythfrontend, there is no such recording visible. Months ago, when my recordings started to fail, I believe I got this message and then tried to delete this particular recording in the frontend. I don't recall any obvious problems at that time, but maybe this is what is gumming up the works for me.
>
>Should I try zapping something in the database to remove any remnants of The Driver?

It is best to investigate first before trying to manually modify the
database. One thing that sometimes happens when people say that a
recording is not visible is that it has been put in a different
recording group and mythfrontend is not displaying that recording
group at present. So first, in mythfrontend go to the recordings list
and use M(enu) > Change Group Filter and make sure it is set to "All
Programmes". Then use Ctrl-S and search for "Doyle" and see if you
can find the missing recording.

If it is still not visible, the next thing to do is to look for its
data in the recorded and recordedprogram tables and go through it all
to see if you can find any anomalies:

sudo mysql
use mythconverg;
select * from recorded
  where title='Republic of Doyle' and subtitle='The Driver'\G
select * from recordedprogram
  where chanid=(select chanid from recorded
                where title='Republic of Doyle' and subtitle='The Driver')
    and starttime=(select progstart from recorded
                   where title='Republic of Doyle' and subtitle='The Driver')\G

In any case, one corrupt recording in the database is unlikely to be
causing any problems with other recordings.

>>> Checking the MariaDB error log, quite a few tables are marked as crashed, e.g. settings, recorded, oldrecorded, etc. I have a daily backup of the database (using mythconverg_backup.pl) and I run optimize_mythdb.pl on a daily basis. Even though optimization completes successfully, MariaDB complains the next time I start the database.
>>
>> I have been in the situation you find yourself in and I did manage to
>> repair the database without having to restore from backup. So I have
>> some suggestions.
>> [good advice on free space and mysqlcheck elided]
>
>I have verified that the partition has 30+ gigs of free space.
>
>Thanks for reminding me about mysqlcheck. I've now run the regular and extended checks and all tables are reporting OK. I'll have to look further at the error log--maybe I was looking at old error messages?

You can run optimize_mythdb.pl manually and get its output on the
command line, rather than relying on log files.

>Thanks again, Stephen, for taking the time to give me some helpful advice.
>
>Craig
Re: Database damage? [ In reply to ]
> On Feb 10, 2021, at 9:08 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>
> On Wed, 10 Feb 2021 19:37:39 -0500, you wrote:
>
>>> On Feb 10, 2021, at 12:56 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>>>
>> Upfront, I really appreciate your detailed response. Unfortunately I'm not quite there yet.
>>
>>> On Wed, 10 Feb 2021 11:44:30 -0500, you wrote:
>>>

Stephen, thank you again for all the suggestions and ideas. To close the loop on my trials...

In the end, it was mostly a hardware problem. I finally determined that I could no longer record from one of my old HDHomerun units even though the power and network lights were lit. As others have experienced, the power supply had apparently weakened such that it could not successfully tune and stream to Myth. Incidentally, this unit has been powered through a UPS for most of its life. I have it back up and running on a universal supply for the time being and I've reached out to SiliconDust about getting a replacement. After all, it is only 10 years old! ;)

The other issue is that I had nearly 700 recordings that were orphaned. The find_orphans.py script would not get rid of them. After a small test, I noticed that if I created a zero byte file (with touch), then find_orphans.py happily deleted such zero byte recordings. At least with the 4 I tested. (This is foreshadowing!) So with the output from find_orphans.py and a little massaging with awk, xargs and touch, I created zero byte ‘recordings’ for the nearly 700 remaining problems. That’s too many. When I ask find_orphans.py to delete them, the script crashes. It also crashed mythbackend a couple of times. A few (maybe 40) recordings got deleted at some point. So I still have around 750 zero byte recordings. I can probably delete the files in batches (how many per batch?) and eventually get rid of them. However, at this point, I’m so sick of messing with this shi|t that I don’t want to deal with it.
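
For the record, the touch-and-batch idea can be sketched like this (the helper and batch size are my own invention, not part of find_orphans.py; the filenames would be parsed out of find_orphans.py output, and working a batch at a time gives the backend's delete queue room to drain between rounds):

```python
import os

def touch_placeholders(names, recdir, batch_size=50):
    """Create zero-byte stand-in files so the backend can 'delete' them.

    names are basenames like '1411_20180108005900.ts'. Yields one batch
    of created paths at a time, so the deletions can be fed to the
    backend in small groups rather than all ~700 at once.
    """
    for i in range(0, len(names), batch_size):
        batch = []
        for name in names[i:i + batch_size]:
            path = os.path.join(recdir, name)
            if not os.path.exists(path):
                open(path, 'wb').close()   # the zero-byte 'recording'
            batch.append(path)
        yield batch
```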

Sometimes, Myth is more work than it is worth.

Craig

Re: Database damage? [ In reply to ]
On Sat, Feb 13, 2021 at 1:23 AM Craig Treleaven <ctreleaven@cogeco.ca> wrote:

> Incidentally, this unit has been powered through a UPS for most of its life.

Does not really matter, as the common cause of these supply
failures is well understood (aging/failing caps) and the quality
of the incoming source is typically not entirely relevant.

> I’ve reached out to SiliconDust about getting a replacement.

Replacements are available from their online shop, or if
you wish to source locally, the specs are referenced on
their forum (I think they moved them, but there is still
a reference on their forum to the new location).

> After all, it is only 10 years old! ;)

That is well in excess of the design life (3-5 years) of
most of these consumer electronics power supplies,
although as with anything else, there will be samples
that last a much shorter time, or a much longer time,
than the design life.

Should you still be using that HDHR in 5+ years,
do not be surprised if you need yet another
adapter replacement.

FWIW, I generally recommend having a "spare"
supply available for testing since any particular
supply can fail at any time (as you mention
sometimes you can find another supply that can
stand-in for operations/testing, often being used
on some other consumer electronics device).
Re: Database damage? [ In reply to ]
On Fri, 12 Feb 2021 20:23:05 -0500, you wrote:

>> On Feb 10, 2021, at 9:08 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>>
>> On Wed, 10 Feb 2021 19:37:39 -0500, you wrote:
>>
>>>> On Feb 10, 2021, at 12:56 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>>>>
>>> Upfront, I really appreciate your detailed response. Unfortunately I'm not quite there yet.
>>>
>>>> On Wed, 10 Feb 2021 11:44:30 -0500, you wrote:
>>>>
>
>Stephen, thank you again for all the suggestions and ideas. To close the loop on my trials...
>
>In the end, it was mostly a hardware problem. I finally determined that I could no longer record from one of my old HDHomerun units even though the power and network lights were lit. As others have experienced, the power supply had apparently weakened such that it could not successfully tune and stream to Myth. Incidentally, this unit has been powered through a UPS for most of its life. I have it back up and running on a universal supply for the time being and I've reached out to SiliconDust about getting a replacement. After all, it is only 10 years old! ;)
>
>The other issue is that I had nearly 700 recordings that were orphaned. The find_orphans.py script would not get rid of them. After a small test, I noticed that if I created a zero byte file (with touch), then find_orphans.py happily deleted such zero byte recordings. At least with the 4 I tested. (This is foreshadowing!) So with the output from find_orphans.py and a little massaging with awk, xargs and touch, I created zero byte 'recordings' for the nearly 700 remaining problems. That's too many. When I ask find_orphans.py to delete them, the script crashes. It also crashed mythbackend a couple of times. A few (maybe 40) recordings got deleted at some point. So I still have around 750 zero byte recordings. I can probably delete the files in batches (how many per batch?) and eventually get rid of them. However, at this point, I'm so sick of messing with this shi|t that I don't want to deal with it.
>
>Sometimes, Myth is more work than it is worth.
>
>Craig

When you say find_orphans.py crashes, do you mean it gives you a
Python error message with a traceback? If so, then please post that
and I can look at trying to fix it.
Re: Database damage? [ In reply to ]
On Fri, 12 Feb 2021 20:23:05 -0500, you wrote:

>> On Feb 10, 2021, at 9:08 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>>
>> On Wed, 10 Feb 2021 19:37:39 -0500, you wrote:
>>
>>>> On Feb 10, 2021, at 12:56 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>>>>
>>> Upfront, I really appreciate your detailed response. Unfortunately I'm not quite there yet.
>>>
>>>> On Wed, 10 Feb 2021 11:44:30 -0500, you wrote:
>>>>
>
>Stephen, thank you again for all the suggestions and ideas. To close the loop on my trials...
>
>In the end, it was mostly a hardware problem. I finally determined that I could no longer record from one of my old HDHomerun units even though the power and network lights were lit. As others have experienced, the power supply had apparently weakened such that it could not successfully tune and stream to Myth. Incidentally, this unit has been powered through a UPS for most of its life. I have it back up and running on a universal supply for the time being and I've reached out to SiliconDust about getting a replacement. After all, it is only 10 years old! ;)

It is generally not a good idea to get replacements from Silicon Dust
as the replacements will also fail in a few years. It is possible to
buy quality plug packs that do not fail like that. Here in New
Zealand, I have found the ones I get from Jaycar a bit expensive but
so far I have never had one fail.
Re: Database damage? [ In reply to ]
On 13/02/2021 01:53, Gary Buhrmaster wrote:
> On Sat, Feb 13, 2021 at 1:23 AM Craig Treleaven <ctreleaven@cogeco.ca> wrote:
>
>> Incidentally, this unit has been powered through a UPS for most of its life.
>
> Does not really matter, as the common cause of these supply
> failures is well understood (aging/failing caps) and the quality
> of the incoming source is typically not entirely relevant.
>
>> I’ve reached out to SiliconDust about getting a replacement.
>
> Replacements are available from their online shop, or if
> you wish to source locally, the specs are referenced on
> their forum (I think they moved them, but there is still
> a reference on their forum to the new location).
>
A point, possibly off-topic. In the UK, the "online shop" is now essentially a SiliconDust page that buys your product through Amazon. The first HDHR dual tuner I bought came from the same page, but Amazon was not involved at that time.

I don't like Amazon or its practices, so I avoid it at all times. I needed another tuner, so I eventually asked my son (who buys tons through Amazon) to get it for me.

As always, YMMV.

--

Mike Perkins

Re: Database damage? [ In reply to ]
> On Feb 12, 2021, at 9:49 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>
> When you say find_orphans.py crashes, do you mean it gives you a
> Python error message with a traceback? If so, then please post that
> and I can look at trying to fix it.

Thanks for the offer. Note that I'm still running 0.28-fixes and the backend and database are on Mac OS X.

I did a lot of manual pruning yesterday and have the number of zero byte recordings down to 561:

[much elided...]
MediumMini.local: Wisdom of the Crowd - Root Directory 1411_20180108005900.ts
MediumMini.local: Wisdom of the Crowd - The Tipping Point 1411_20180115005900.ts
MediumMini.local: Young Sheldon - A Math Emergency and Perky Palms 1041_20190208013000.ts
MediumMini.local: Young Sheldon - A Loaf of Bread and a Grand Old Flag 1041_20190222013000.ts
Count: 561
Are you sure you want to continue?
> yes
Traceback (most recent call last):
  File "/opt/local/share/mythtv/contrib/find_orphans.py", line 230, in <module>
    main()
  File "/opt/local/share/mythtv/contrib/find_orphans.py", line 214, in main
    opt[1](opt[2])
  File "/opt/local/share/mythtv/contrib/find_orphans.py", line 129, in delete_recs
    rec.delete(True, True)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/dataheap.py", line 365, in delete
    return self.getProgram().delete(force, rerecord)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/mythproto.py", line 964, in delete
    be.forgetRecording(self)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/mythproto.py", line 661, in forgetRecording
    program.toString()]))
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/mythproto.py", line 155, in backendCommand
    return self._conn.command.backendCommand(data)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/connections.py", line 314, in backendCommand
    self.reconnect(True)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/connections.py", line 243, in reconnect
    self.connect()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/connections.py", line 224, in connect
    self.socket.connect((self.host, self.port))
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/utility/other.py", line 306, in connect
    socket.socket.connect(self, *args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 228, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 61] Connection refused


Note that the backend crashed. The backend log didn't capture anything. A subsequent run of find_orphans.py indicated there are now 560 zero-byte recordings. I did not try to determine which one was removed! ;)

Following is some version info in case it helps:


MediumMini:~ mytthtv$ python --version
Python 2.7.10
MediumMini:~ mytthtv$ port provides /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py is provided by: python27

MediumMini:~ mytthtv$ mythfrontend --version
Please attach all output as a file in bug reports.
MythTV Version : v0.28.1-e26a33c6-MacPorts
MythTV Branch : fixes/0.28
Network Protocol : 88
Library API : 0.28.20161120-1
QT Version : 5.8.0
Options compiled in:
release darwin_da using_corevideo using_backend using_bindings_perl using_bindings_python using_bindings_php using_darwin using_frontend using_hdhomerun using_vbox using_libcrypto using_libdns_sd using_libfftw3 using_libxml2 using_lirc using_mheg using_opengl using_opengl_video using_opengl_themepainter using_qtwebkit using_qtscript using_taglib using_appleremote using_bindings_perl using_bindings_python using_bindings_php using_darwin_da using_freetype2 using_mythtranscode using_opengl using_ffmpeg_threads using_mheg using_libass using_libxml2


Craig

Re: Database damage? [ In reply to ]
> On Feb 13, 2021, at 5:26 AM, Mike Perkins <mikep@randomtraveller.org.uk> wrote:
>
> On 13/02/2021 01:53, Gary Buhrmaster wrote:
>> On Sat, Feb 13, 2021 at 1:23 AM Craig Treleaven <ctreleaven@cogeco.ca> wrote:
>>> Incidentally, this unit has been powered through a UPS for most of its life.
>> Does not really matter, as the common cause of these supply
>> failures is well understood (aging/failing caps) and the quality
>> of the incoming source is typically not entirely relevant.
>>> I’ve reached out to SiliconDust about getting a replacement.
>> Replacements are available from their online shop, or if
>> you wish to source locally, the specs are referenced on
>> their forum (I think they moved them, but there is still
>> a reference on their forum to the new location).
> A point, possibly off-topic. In the UK the "online shop" is now essentially a silicondust page that buys your product through Amazon. The first HDHR dual tuner I bought came from the same page but Amazon was not involved at that time.

Long ago, I would have bought a replacement power adapter at Radio Shack. Nowadays, I don't know of a physical store that would have such an item in stock, let alone a quality product. The pandemic situation is such that I don't want to go pawing through a bunch of bins. (I'm going through chemo.)

BTW, I think SiliconDust’s reputation for problems is overblown. I purchased my first HDHomerun before they had failures with the wall warts. They clearly got a bad batch of power adapters and did a recall to make it right. Ever since, they’ve provided replacement adapters at a nominal cost. As Gary says, these things have never been expected to last forever. (And I think they are often killed by surges in lightning-prone areas of the world—which is not me.) I honestly can’t remember if I got the free replacements from SiliconDust during the recall. Either way, my HDHomeruns have outlived multiple configurations of backend hardware!

Incidentally, the power supply I’m using is a ‘universal’ type: selectable 3-12 V, interchangeable tips, and with a switch to change from centre-positive to centre-negative. However, the HDHomerun wants 5V and the nearest setting I have is 6V. Also, the tip is supposed to be 5.5mm OD but mine is 5 mm. It works but it isn’t really right. My box of used power adapters does not have anything supplying 5V. The closest I have is 9V; most are 12V.

Craig

Re: Database damage? [ In reply to ]
Craig Treleaven says:
> The other issue is that I had nearly 700 recordings that were
> orphaned. The find_orphans.py script would not get rid of them.

The script is supposed to handle both zero-byte files unassociated with a database entry, and orphaned database entries without a corresponding file. I think what you saw is a different symptom of the general inability of the script to gracefully handle many files.

> So with the output from find_orphans.py and a little massaging with
> awk, xargs and touch, I created zero byte ‘recordings’ for the
> nearly 700 remaining problems. That’s too many. When I ask
> find_orphans.py to delete them, the script crashes. It also crashed
> mythbackend a couple of times.

<https://lists.archive.carbon60.com/mythtv/users/616731#616731> may be helpful.

--
Frontend: Apple MacBook Pro 2012, Nvidia Shield 2017
Backend: HP Microserver N40L 1.5GHz with 4x3TB HDDs
Tuners: Two over-the-air ATSC inputs with multirec
Re: Database damage? [ In reply to ]
On Sat, 13 Feb 2021 07:47:07 -0500, you wrote:

>> On Feb 12, 2021, at 9:49 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>>
>> When you say find_orphans.py crashes, do you mean it gives you a
>> Python error message with a traceback? If so, then please post that
>> and I can look at trying to fix it.
>
>Thanks for the offer. Note that I'm still running 0.28-fixes and the backend and database are on Mac OS X.
>
>I did a lot of manual pruning yesterday and have the number of zero byte recordings down to 561:
>
> [much elided…]
> MediumMini.local: Wisdom of the Crowd - Root Directory 1411_20180108005900.ts
> MediumMini.local: Wisdom of the Crowd - The Tipping Point 1411_20180115005900.ts
> MediumMini.local: Young Sheldon - A Math Emergency and Perky Palms 1041_20190208013000.ts
> MediumMini.local: Young Sheldon - A Loaf of Bread and a Grand Old Flag 1041_20190222013000.ts
> Count: 561
>Are you sure you want to continue?
>> yes
>Traceback (most recent call last):
> File "/opt/local/share/mythtv/contrib/find_orphans.py", line 230, in <module>
> main()
> File "/opt/local/share/mythtv/contrib/find_orphans.py", line 214, in main
> opt[1](opt[2])
> File "/opt/local/share/mythtv/contrib/find_orphans.py", line 129, in delete_recs
> rec.delete(True, True)
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/dataheap.py", line 365, in delete
> return self.getProgram().delete(force, rerecord)
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/mythproto.py", line 964, in delete
> be.forgetRecording(self)
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/mythproto.py", line 661, in forgetRecording
> program.toString()]))
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/mythproto.py", line 155, in backendCommand
> return self._conn.command.backendCommand(data)
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/connections.py", line 314, in backendCommand
> self.reconnect(True)
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/connections.py", line 243, in reconnect
> self.connect()
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/connections.py", line 224, in connect
> self.socket.connect((self.host, self.port))
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/MythTV/utility/other.py", line 306, in connect
> socket.socket.connect(self, *args, **kwargs)
> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 228, in meth
> return getattr(self._sock,name)(*args)
>socket.error: [Errno 61] Connection refused
>
>
>Note that the backend crashed. The backend log didn't capture anything. A subsequent run of find_orphans.py indicated there are now 560 zero-byte recordings. I did not try to determine which one was removed! ;)
>
>Following is some version info in case it helps:
>
>
>MediumMini:~ mytthtv$ python --version
>Python 2.7.10
>MediumMini:~ mytthtv$ port provides /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py
>/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py is provided by: python27
>
>MediumMini:~ mytthtv$ mythfrontend --version
>Please attach all output as a file in bug reports.
>MythTV Version : v0.28.1-e26a33c6-MacPorts
>MythTV Branch : fixes/0.28
>Network Protocol : 88
>Library API : 0.28.20161120-1
>QT Version : 5.8.0
>Options compiled in:
> release darwin_da using_corevideo using_backend using_bindings_perl using_bindings_python using_bindings_php using_darwin using_frontend using_hdhomerun using_vbox using_libcrypto using_libdns_sd using_libfftw3 using_libxml2 using_lirc using_mheg using_opengl using_opengl_video using_opengl_themepainter using_qtwebkit using_qtscript using_taglib using_appleremote using_bindings_perl using_bindings_python using_bindings_php using_darwin_da using_freetype2 using_mythtranscode using_opengl using_ffmpeg_threads using_mheg using_libass using_libxml2
>
>
>Craig

The find_orphans.py error is just saying it has lost its connection to
the backend and was unable to reconnect. So my best guess is that the
problem is in mythbackend and it crashing is the cause of the problem,
not anything in find_orphans.py. However, the mythbackend crash is
likely triggered by deleting so many recordings at once - hopefully
something that will already have been fixed in later MythTV versions.
So a possible workaround would be to just add a delay of several
seconds after each time the rec.delete() call in line 129 of
find_orphans.py is run. I would try adding "time.sleep(10)" after the
rec.delete() call. You may need to also add an "import time" line at
the start of the file if it is not already there. If that works, you
could just tell find_orphans.py to delete all the bad recordings and
leave it for a couple of hours while it happened. I seem to remember
having to do something like that when I had problems with an even
older MythTV version. If 10 seconds is insufficient, Mythbackend may
still crash at some point, but by the time it does it should have done
a lot of the deletes.
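[Editor's note: a minimal sketch of that workaround. The loop below is hypothetical and paraphrases the relevant part of find_orphans.py rather than reproducing it; `rec.delete(True, True)` is the call Stephen refers to.]

```python
import time

def delete_recs(recs, pause=10):
    """Delete orphaned recordings one at a time, sleeping between
    calls so mythbackend has time to finish each deletion."""
    for rec in recs:
        rec.delete(True, True)  # force deletion, allow re-record
        time.sleep(pause)       # throttle; raise if the backend still crashes
```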
Re: Database damage? [ In reply to ]
> On Feb 13, 2021, at 10:04 AM, Yeechang Lee <ylee@columbia.edu> wrote:
>
> Craig Treleaven says:
>> The other issue is that I had nearly 700 recordings that were
>> orphaned. The find_orphans.py script would not get rid of them.
>
> The script is supposed to handle both zero-byte files unassociated with a database entry, and orphaned database entries without a corresponding file. I think what you saw is a different symptom of the general inability of the script to gracefully handle many files.
>
>> So with the output from find_orphans.py and a little massaging with
>> awk, xargs and touch, I created zero byte ‘recordings’ for the
>> nearly 700 remaining problems. That’s too many. When I ask
>> find_orphans.py to delete them, the script crashes. It also crashed
>> mythbackend a couple of times.
>
> <https://lists.archive.carbon60.com/mythtv/users/616731#616731> may be helpful.

That sounds like a possibility. Of course, OS X doesn’t do things quite the same way. I did, however, find a blog post talking about temporarily or permanently changing the permissible number of open file descriptors:

https://becomethesolution.com/blogs/mac/increase-open-file-descriptor-limits-fix-too-many-open-files-errors-mac-os-x-10-14

On my backend, it shows a soft limit per process of 256 fd's.

MediumMini:~ mytthtv$ launchctl limit maxfiles
maxfiles 256 unlimited

I’m not clear, however: if I use the launchctl command to temporarily increase the limit, would I need to kill and restart the backend for it to take effect? I’m also a bit timid to try random advice from the internet.
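[Editor's note: as far as I know, resource limits are inherited when a process starts, so raising the launchd limit would only affect processes started afterwards; an already-running mythbackend would keep its 256-descriptor limit until restarted. Separately, a Python script can raise its own soft limit up to the hard limit without touching launchctl at all, e.g. near the top of find_orphans.py. A sketch (the function name and target value are invented for illustration):]

```python
import resource

def raise_fd_limit(target=4096):
    """Raise this process's soft open-file limit toward `target`,
    capped at the hard limit. Affects only the current process."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard != resource.RLIM_INFINITY:
        target = min(target, hard)  # cannot exceed the hard limit
    if target > soft:
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]
```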

Craig

Re: Database damage? [ In reply to ]
On 13/02/2021 13:06, Craig Treleaven wrote:
>
>> On Feb 13, 2021, at 5:26 AM, Mike Perkins <mikep@randomtraveller.org.uk> wrote:
>>
>> On 13/02/2021 01:53, Gary Buhrmaster wrote:
>>> On Sat, Feb 13, 2021 at 1:23 AM Craig Treleaven <ctreleaven@cogeco.ca> wrote:
>>>> Incidentally, this unit has been powered through a UPS for most of its life.
>>> Does not really matter, as the common cause of these supply
>>> failures is well understood (aging/failing caps) and the quality
>>> of the incoming source is typically not entirely relevant.
>>>> I’ve reached out to SiliconDust about getting a replacement.
>>> Replacements are available from their online shop, or if
>>> you wish to source locally, the specs are referenced on
>>> their forum (I think they moved them, but there is still
>>> a reference on their forum to the new location).
>> A point, possibly off-topic. In the UK the "online shop" is now essentially a silicondust page that buys your product through Amazon. The first HDHR dual tuner I bought came from the same page but Amazon was not involved at that time.
>
> Long ago, I would have bought a replacement power adapter at Radio Shack. Nowadays, I don’t know of a physical store that would have such an item in stock. Let alone a quality product. The pandemic situation is such that I don’t want to go pawing through a bunch of bins. (I’m going through chemo.)
>
> BTW, I think SiliconDust’s reputation for problems is overblown. I purchased my first HDHomerun before they had failures with the wall warts. They clearly got a bad batch of power adapters and did a recall to make it right. Ever since, they’ve provided replacement adapters at a nominal cost. As Gary says, these things have never been expected to last forever. (And I think they are often killed by surges in lightning-prone areas of the world—which is not me.) I honestly can’t remember if I got the free replacements from SiliconDust during the recall. Either way, my HDHomeruns have outlived multiple configurations of backend hardware!
>
> Incidentally, the power supply I’m using is a ‘universal’ type: selectable 3-12 V, interchangeable tips, and with a switch to change from centre-positive to centre-negative. However, the HDHomerun wants 5V and the nearest setting I have is 6V. Also, the tip is supposed to be 5.5mm OD but mine is 5 mm. It works but it isn’t really right. My box of used power adapters does not have anything supplying 5V. The closest I have is 9V; most are 12V.
>
Of course you will almost always[1] have a powerful 5 V supply nearby[2], if you care to make up some adapter leads.

[1] Assuming that your backend isn't a Rpi or a NUC or something like that, of course.
[2] Also assuming that your HDHR(s) are near enough to your backend.

--

Mike Perkins

Re: Database damage? [ In reply to ]
> On Feb 13, 2021, at 11:12 AM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>
> So a possible workaround would be to just add a delay of several
> seconds after each time the rec.delete() call in line 129 of
> find_orphans.py is run. I would try adding "time.sleep(10)" after the
> rec.delete() call. You may need to also add an "import time" line at
> the start of the file if it is not already there. If that works, you
> could just tell find_orphans.py to delete all the bad recordings and
> leave it for a couple of hours while it happened. I seem to remember
> having to do something like that when I had problems with an even
> older MythTV version. If 10 seconds is insufficient, Mythbackend may
> still crash at some point, but by the time it does it should have done
> a lot of the deletes.
>

Success! I first tried a sleep interval of 2 seconds and it crashed after ~40 deletions. I upped the interval to 4 seconds and it processed 120 zero byte recordings. With an 8 second interval, it trundled through the remaining recordings and finished without error.

I guess I've done my spring cleaning early this year!

Thanks very much.

Craig
Re: Database damage? [ In reply to ]
On Sat, 13 Feb 2021 13:50:30 -0500, you wrote:

>> On Feb 13, 2021, at 11:12 AM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>>
>> So a possible workaround would be to just add a delay of several
>> seconds after each time the rec.delete() call in line 129 of
>> find_orphans.py is run. I would try adding "time.sleep(10)" after the
>> rec.delete() call. You may need to also add an "import time" line at
>> the start of the file if it is not already there. If that works, you
>> could just tell find_orphans.py to delete all the bad recordings and
>> leave it for a couple of hours while it happened. I seem to remember
>> having to do something like that when I had problems with an even
>> older MythTV version. If 10 seconds is insufficient, Mythbackend may
>> still crash at some point, but by the time it does it should have done
>> a lot of the deletes.
>>
>
>Success! I first tried a sleep interval of 2 seconds and it crashed after ~40 deletions. I upped the interval to 4 seconds and it processed 120 zero byte recordings. With an 8 second interval, it trundled through the remaining recordings and finished without error.
>
>I guess I've done my spring cleaning early this year!
>
>Thanks very much.
>
>Craig

Excellent! Mythbackend is pretty slow at doing deletions, so my
recommendation of 10 seconds was based on my experience with that,
plus a bit to be sure. Fortunately, MythTV is pretty forgiving of
crashes, but it would pay to run a full database check again, just in
case.
Re: Database damage? [ In reply to ]
On 2021-02-12 9:52 p.m., Stephen Worthington wrote:
> It is generally not a good idea to get replacements from Silicon Dust
> as the replacements will also fail in a few years. It is possible to
> buy quality plug packs that do not fail like that. Here in New
> Zealand, I have found the ones I get from Jaycar a bit expensive but
> so far I have never had one fail.
Waaay OT, but after my first HDHR power wart failed, I purchased some
power plugs of the correct size for the HDHR and some Molex sockets and
wired the HDHR's into the 5V line of the mythbox' power supply. There is
lots of empty space in the case, so the splitter and the HDHRs ended up
mounted in the computer case: two coax cables from the antennas inbound
and two cat5 cables to the router outbound.

If I carved away some of the case, I could actually mount the router
into the computer case too, but sloth, lethargy and procrastination have
absolutely precluded me from getting to that "problem".

Geoff

Re: Database damage? [ In reply to ]
On 14/02/2021 03:29, Stephen Worthington wrote:
>> Success! I first tried a sleep interval of 2 seconds and it crashed after ~40 deletions. I upped the interval to 4 seconds and it processed 120 zero byte recordings. With an 8 second interval, it trundled through the remaining recordings and finished without error.
>>
>> I guess I’ve done my spring cleaning early this year!
>>
>> Thanks very much.
>>
>> Craig
>
> Excellent! Mythbackend is pretty slow at doing deletions, so my
> recommendation of 10 seconds was based on my experience with that,
> plus a bit to be sure. Fortunately, MythTV is pretty forgiving of
> crashes, but it would pay to run a full database check again, just in
> case.

Surely the crash is still a backend bug though...
Re: Database damage? [ In reply to ]
On 14/02/2021 04:09, R. G. Newbury wrote:
> On 2021-02-12 9:52 p.m., Stephen Worthington wrote:
> Waaay OT, but after my first HDHR power wart failed, I purchased some
> power plugs of the correct size for the HDHR and some Molex sockets and
> wired the HDHR's into the 5V line of the mythbox' power supply. There is
> lots of empty space in the case, so the splitter and the HDHRs ended up
> mounted in the computer case: two coax cables from the antennas inbound
> and two cat5 cables to the router outbound.
>
> If I carved away some of the case, I could actually mount the router
> into the computer case too, but sloth, lethargy and procrastination have
> absolutely precluded me from getting to that "problem".

While that's not a bad idea (but always check that your HDHR actually
requires 5V for its power supply -- future models might not) one should
also always protect the circuit by means of a fuse that's rated the same
or a very small amount more than the original power supply.

The reason is that if there is a fault in the powered device (in this
case the HDHR) which causes it to draw more power than expected, the
computer's power supply will be able to actually supply that additional
amount of power, thereby pumping much more energy into the failing
device than its safety design assumes. This can cause a fire if not
detected in time.
Re: Database damage? [ In reply to ]
On Sun, 14 Feb 2021 08:35:44 +0100, you wrote:

>On 14/02/2021 03:29, Stephen Worthington wrote:
>>> Success! I first tried a sleep interval of 2 seconds and it crashed after ~40 deletions. I upped the interval to 4 seconds and it processed 120 zero byte recordings. With an 8 second interval, it trundled through the remaining recordings and finished without error.
>>>
>>> I guess I've done my spring cleaning early this year!
>>>
>>> Thanks very much.
>>>
>>> Craig
>>
>> Excellent! Mythbackend is pretty slow at doing deletions, so my
>> recommendation of 10 seconds was based on my experience with that,
>> plus a bit to be sure. Fortunately, MythTV is pretty forgiving of
>> crashes, but it would pay to run a full database check again, just in
>> case.
>
>Surely the crash is still a backend bug though...

Yes, but any time mythbackend crashes, there is the possibility it did
it in the middle of a database operation. So there is the possibility
that a table is crashed. A worse but virtually impossible to detect
error is also possible, where it updated one table (eg recorded) but
failed to update the matching data in another table (eg
recordedprogram, recordedseek). Since it was doing deletes at the
time, that might mean that some of the recording tables have data for
the recording being deleted at that time, but others have had it
deleted.
Re: Database damage? [ In reply to ]
On Sun, Feb 14, 2021, 02:23 Stephen Worthington <stephen_agent@jsw.gen.nz>
wrote:

> On Sun, 14 Feb 2021 08:35:44 +0100, you wrote:
>
>
> Yes, but any time mythbackend crashes, there is the possibility it did
> it in the middle of a database operation. So there is the possibility
> that a table is crashed. A worse but virtually impossible to detect
> error is also possible, where it updated one table (eg recorded) but
> failed to update the matching data in another table (eg
> recordedprogram, recordedseek).


Does this operation not exist within a START TRANSACTION ... COMMIT
statement? If not, why not?

A transaction in this case would completely eliminate the error you propose.
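[Editor's note: for illustration of the transactional semantics only. The sketch below uses Python's built-in sqlite3; MythTV actually uses MySQL/MariaDB, and the two-table schema here merely echoes MythTV table names rather than reproducing the real mythconverg layout.]

```python
import sqlite3

# Toy schema echoing MythTV table names; not the real mythconverg schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE recorded     (chanid INT, starttime TEXT);
    CREATE TABLE recordedseek (chanid INT, starttime TEXT, mark INT);
    INSERT INTO recorded     VALUES (1411, '20190311025900');
    INSERT INTO recordedseek VALUES (1411, '20190311025900', 42);
""")

def delete_recording(conn, chanid, starttime):
    """Delete one recording from every table atomically: either both
    DELETEs commit together, or (on a crash/error) neither does."""
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("DELETE FROM recorded WHERE chanid=? AND starttime=?",
                     (chanid, starttime))
        conn.execute("DELETE FROM recordedseek WHERE chanid=? AND starttime=?",
                     (chanid, starttime))

delete_recording(conn, 1411, '20190311025900')
```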

Mike
Re: Database damage? [ In reply to ]
> On Feb 13, 2021, at 9:29 PM, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
>
> Excellent! Mythbackend is pretty slow at doing deletions, so my
> recommendation of 10 seconds was based on my experience with that,
> plus a bit to be sure. Fortunately, MythTV is pretty forgiving of
> crashes, but it would pay to run a full database check again, just in
> case.

Good point. Both the mysqlcheck regular and extended checks are happy!

Craig

Re: Database damage? [ In reply to ]
On Sun, 14 Feb 2021 03:06:05 -0700, you wrote:

>On Sun, Feb 14, 2021, 02:23 Stephen Worthington <stephen_agent@jsw.gen.nz>
>wrote:
>
>> On Sun, 14 Feb 2021 08:35:44 +0100, you wrote:
>>
>>
>> Yes, but any time mythbackend crashes, there is the possibility it did
>> it in the middle of a database operation. So there is the possibility
>> that a table is crashed. A worse but virtually impossible to detect
>> error is also possible, where it updated one table (eg recorded) but
>> failed to update the matching data in another table (eg
>> recordedprogram, recordedseek).
>
>
>Does this operation not exist within a START TRANSACTION ... COMMIT
>statement?
>If not,why not?
>
>A transaction in this case would completely eliminate the error you propose.
>
>Mike

I do not think the MythTV code uses transactions. I just did this on
an old mythtv master I had lying around (probably v30):

grep -ir "start transa" *

There were no matches.

As best I can recall, transactions in MySQL/MariaDB are relatively new
and initially only worked on InnoDB tables. They were not around when
the basic MythTV code was written, and lots of mythconverg tables are
not InnoDB. And using them can be quite complicated too.