Mailing List Archive

Re: Building a new MythTV Backend for 2022
On Mon, Jan 10, 2022 at 2:17 PM Simon <linux@thehobsons.co.uk> wrote:

> Mark Wedel <mwedel@sonic.net> wrote:
>
> > MythTV workloads are pretty kind on hard drives - it tends to be
> sequential reads and writes, which HDDs are pretty good with.
>
> Reads, yes sequential (mostly)
> Writes, definitely not. Each recording causes a minimum of 2 seeks/second
> - write a bit of data, seek, update the metadata, seek, write a bit more of
> the stream, … Especially with multiple recordings, that can be a lot of
> head movement - albeit not as hard work as (say) a well worked database
> without adequate caching.
>
>
>
> > James Abernathy <jfabernathy@gmail.com> wrote:
>
> > When you mirror your SSDs, are you using an mdadm mirror or a ZFS mirror? I
> know how to set up ZFS or mdadm mirrors for data drives, but if I use an
> SSD, either SATA or NVMe, to boot from, the setup for a mirrored boot drive
> is a lot more complicated judging from the blogs I've seen. How do you do it?
>
> I “just do it” !
>
> I think GRUB has dealt with this for a long time, but also as long as you
> keep to the legacy V1 metadata format for mdadm mirrored sets, each
> partition in the set will act (read only, for booting from) the same as the
> array itself. That’s because if you use the older format, the metadata is
> at the end of the partition and the actual data (filesystem) starts at the
> beginning of the partition. So you can boot from sda1, sdb1, or the array
> that they are part of - when I first used software raid, I don’t think GRUB
> did handle arrays for booting and “inertia” means that I still tend to use
> the V1 metadata format for my boot partition !
>
>
>
> James Abernathy <jfabernathy@gmail.com> wrote:
>
> > Based on 600 TBW, I could record all the programs in Primetime on the 4
> Major networks in the USA every day plus some assorted other stuff at an
> average rate of 5GB/hour and use up the SSD in 24.93 years. So if I make
> it to 94, then I'll have to rebuild. :-)
>
> Or it could fail next week - but it’s OK, they’ll send you a nice new but
> blank one to replace it.
> Kingston have just sent me a new 240G drive to replace one that failed. In
> this case I could probably recover the data because it works for a while
> and then “just disappears” off the bus - not that I need to as it was half
> of a mirrored pair.
> Now, if it’s a standalone drive, with years of recordings on it - a
> replacement for the failed drive isn't much compensation for its failure
> :-(
>
>
> Simon
>

I'll probably keep my MythTV and NAS data on mirrored hard drives and not
SSDs. My current HDDs have been working 24/7 without fail for 3 years this
month. I'm a firm believer that most failures of computer hardware
occur during a power cycle. I'll spend the next few weeks testing ZFS,
Btrfs, and plain old mdadm on ext4 to see what I trust for the next system I
build.
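
For reference, a rough sketch of how each of those mirror types is typically created - device names here are hypothetical examples, and all of these commands destroy whatever is already on the disks:

# mdadm mirror with ext4 on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0

# ZFS mirror
zpool create tank mirror /dev/sdb /dev/sdc

# Btrfs with mirrored data and metadata
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc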

Jim A
Re: Building a new MythTV Backend for 2022
<much snipped>

> While it is true that the DB on an SSD will write more often, it has the advantage of being faster. As a safety measure, I always use either mirrors (for SSDs) or RAIDZ (ZFS) for the HDs.

Alain

A genuine question, not trolling, and bearing in mind that you may be doing this 'just for fun':

Since it is 'just TV', a solution like backintime is trivially easy - it's easy to make a set of backups (use one, take the others off site) - so I don't understand the rationale. Please utter a few words.

James
Re: Building a new MythTV Backend for 2022
> On 11 Jan 2022, at 1:27 am, Mike Perkins <mikep@randomtraveller.org.uk> wrote:
>
>>
>> Thanks for the information. Based on 600 TBW, I could record all the
>> programs in Primetime on the 4 Major networks in the USA every day plus some
>> assorted other stuff at an average rate of 5GB/hour and use up the SSD in
>> 24.93 years. So if I make it to 94, then I'll have to rebuild. :-)
> Don't overlook those magic weasel words every salesman likes to use: "up to" :)

I won't live long enough to settle the argument, but searching YouTube for "testing SSDs to death" says nice things about Samsung's weasel words (the apostrophe is correct there, unlike in "ssd's", where it just looks nice) - they seem to underestimate by around 50%.

James
Re: Building a new MythTV Backend for 2022
Hoi Simon,

Monday, January 10, 2022, 8:17:21 PM, you wrote:


> Or it could fail next week - but it’s OK, they’ll send you a nice new but blank one to replace it.
> Kingston have just sent me a new 240G drive to replace one that
> failed. In this case I could probably recover the data because it
> works for a while and then “just disappears” off the bus - not that
> I need to as it was half of a mirrored pair.
> Now, if it’s a standalone drive, with years of recordings on it - a
> replacement for the failed drive isn't much compensation for its failure :-(


> Simon

Very probably this is a SATA controller or cable issue. I have
encountered those, and since I replaced the controller it's as stable
as a rock.


Until the next mail,
Hika mailto:hikavdh@gmail.com

"Without hope you cannot live
Without life there is no hope
The eternal dilemma
Especially when you have to destroy hope in order to survive!"

The learning Human

Re: Building a new MythTV Backend for 2022
> On 10 Jan 2022, at 6:06 pm, Mike Perkins <mikep@randomtraveller.org.uk> wrote:
>
>> If you want to record to the SSD, then you are likely to hit the
>> lifetime write limit fairly rapidly. But just running MythTV and
>> normal Linux on an SSD and there are no problems with lifetime. You
>> still need to worry about it just dying unexpectedly, like any disk
>> drive (or any electronics, for that matter).
> I would think that is the other way around. Sure you are writing TB chunks to a recording disk but it is written once and then read for a while until deleted. On the other hand that database is getting *hammered* all the time as it updates e.g. seek tables. And do not forget the daily mythfilldatabase updates! Lots and lots of small updates to files and inodes all over the place.
>
> The one thing that you can be certain of with any (currently manufactured) SSD is that it is guaranteed to fail. Once it reaches the lifetime limit then bang! it's gone. On the other hand, a looked after HDD will just keep spinning.
>
> Processor speed and memory increases are such that I don't need that extra disk write speed, not for something as non-critical as mythtv. SSDs undoubtedly have a place for certain use cases but thrashing a media database isn't it, in my view.

Mike, having been there (which is why I've been so opinionated): once you reach the lifetime limit little happens except that the disk becomes read-only. No bang, no gone, nothing other than read-only.

James

Re: Building a new MythTV Backend for 2022
On 1/10/22 11:17 AM, Simon wrote:
<much snipped>

I'm running Debian, so this may differ from Ubuntu.
I believe what I did was to make a single-disk zpool, install on that, and then later attach the second drive to make a mirror. This all works fine for normal operation.
What does not quite seem to work fine is the updating of GRUB - maybe it has changed, but it used to be the case that it would only update the MBR of one of the drives, and I would have to run the command explicitly to update the MBR on the second drive in the mirror.
Note that even in the normal case (it only updating one drive), the system would still work fine - the problem would be that if that first drive failed, it would not be able to boot from the second drive. I did some actual testing, changing the boot ordering of the drives to make sure it could properly boot from either of the mirrored boot drives.
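
A rough sketch of that sequence, with hypothetical device and pool names (check yours with lsblk and zpool status first):

# attach a second disk to an existing single-disk pool, turning it into a mirror
zpool attach rpool /dev/sda2 /dev/sdb2
zpool status rpool   # wait for the resilver to finish
# then install the boot loader on both drives so either one can boot
grub-install /dev/sda
grub-install /dev/sdb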

Just to add my smartmon output:
Model Number: Samsung SSD 980 1TB
Percentage Used: 4%
Data Units Read: 12,542,386 [6.42 TB]
Data Units Written: 64,390,218 [32.9 TB]
Host Read Commands: 96,159,839
Power Cycles: 53
Power On Hours: 3,636
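
(For reference, that summary is the sort of thing smartmontools prints - something like "smartctl -a /dev/nvme0", assuming that is your NVMe device node; the exact fields vary by drive.)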


I should probably track what is writing all that data, but I have thoughts:
- I recall there being a bit of trial and error to get the initial setup done, so I think I might have needed to copy/install on the drive a few times.
- As mentioned previously, this host acts as a server for other stuff, so that may add up, plus OS upgrades.
- Home directory is also on this - that probably isn't a lot (though browser cache may add something); looking at the space change on each daily snapshot (zfs list -t all -r ...) shows ~300 MB there.

I should probably look at it in another month and see if it changes much.

Note that doing some quick math shows that it will reach its lifetime limit when ~800 TB of data has been written. There is probably a wide margin of error, as that 4% could be 4.4 or 3.6 depending on how it does rounding. But 800 TB falls within the 600-1000 write cycles that I think were previously mentioned. And presuming it keeps going at this rate, it would still be about 9 years before it gets there. The chance of still using this device in 9 years seems quite low.
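
For what it's worth, the arithmetic behind that estimate goes roughly like this, assuming the box stays on 24/7 and the 4% figure is taken at face value:

32.9 TB written / 0.04 used        ~ 820 TB implied endurance
820 TB - 32.9 TB                   ~ 790 TB remaining
32.9 TB / 3,636 power-on hours     ~ 9 GB/hour, or roughly 80 TB/year
790 TB / 80 TB per year            ~ 9-10 years to go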

Re: Building a new MythTV Backend for 2022
On 11/01/2022 01:57, James Linder wrote:
<much snipped>
>
> Mike, having been there (which is why I've been so opinionated): once you reach the lifetime limit little happens except that the disk becomes read-only. No bang, no gone, nothing other than read-only.
>
And of course my personal experience is the opposite. In a drawer I have 4 SSDs which just quit
dead, not at the same time or in the same box. Couldn't get anything off them at all.

In fact, if SSDs were to customarily fail to read-only that would be a much better outcome than I
could get from any HDD, but nobody is ever going to promise that.

At the moment SSDs are still /relatively/ new technology compared to HDDs, which have been around
since computers moved to transistors. I have a suspicion that makers love SSDs because they are easy
to manufacture and, since they have a guaranteed lifetime, also a guaranteed replacement cycle.

I use SSDs for laptops, thin clients and test equipment where the data volumes are small enough that
I don't care about lifetime length. I will not use them in servers, though, except perhaps as boot
media before running off LVM volumes.

As always, YMMV.

--

Mike Perkins

Re: Building a new MythTV Backend for 2022
>>> And of course my personal experience is the opposite. In a drawer I have 4 SSDs which just quit
>>> dead, not at the same time or in the same box. Couldn't get anything off them at all.

Same here. My experience is that when a cell fails, it will kill the device.

Doug
Re: Building a new MythTV Backend for 2022
On Mon, 10 Jan 2022 10:06:52 +0000, you wrote:

>On 10/01/2022 08:46, Stephen Worthington wrote:
>>
>> If you want to record to the SSD, then you are likely to hit the
>> lifetime write limit fairly rapidly. But just running MythTV and
>> normal Linux on an SSD and there are no problems with lifetime. You
>> still need to worry about it just dying unexpectedly, like any disk
>> drive (or any electronics, for that matter).
>>
>I would think that is the other way around. Sure you are writing TB chunks to a recording disk but
>it is written once and then read for a while until deleted. On the other hand that database is
>getting *hammered* all the time as it updates e.g. seek tables. And do not forget the daily
>mythfilldatabase updates! Lots and lots of small updates to files and inodes all over the place.
>
>The one thing that you can be certain of with any (currently manufactured) SSD is that it is
>guaranteed to fail. Once it reaches the lifetime limit then bang! it's gone. On the other hand, a
>looked after HDD will just keep spinning.
>
>Processor speed and memory increases are such that I don't need that extra disk write speed, not for
>something as non-critical as mythtv. SSDs undoubtedly have a place for certain use cases but
>thrashing a media database isn't it, in my view.

In my case, my database is massive and its speed determines the speed
of MythTV. Without an NVMe SSD, MythTV would be almost unusable for
me now. Even with the NVMe SSD (which was the fastest available when
I got it), creating a new recording rule now takes about two seconds,
and there are equivalent delays for most things that use the database.
I am looking forward to when I can replace my MythTV box with one that
has an SSD that runs at three times the current speed, as that will
make it more responsive again.

Databases are in fact a classic case for use of an SSD, depending on
the size and performance requirements. Most MythTV users have only
small databases where it does not matter much whether a hard drive or
SSD is used. Mine is in a completely different class:

MariaDB [mythconverg]> select count(*) from recorded;
+----------+
| count(*) |
+----------+
|    50999 |
+----------+
1 row in set (0.001 sec)

MariaDB [mythconverg]> select sum(filesize)/1024/1024/1024 from recorded;
+------------------------------+
| sum(filesize)/1024/1024/1024 |
+------------------------------+
|           98676.497858861461 |
+------------------------------+
1 row in set (0.056 sec)

(98,676 GiB, or about 96.4 TiB)

MariaDB [mythconverg]> select count(*) from recordedseek;
+-----------+
| count(*) |
+-----------+
| 435567904 |
+-----------+
1 row in set (0.000 sec)

root@mypvr:/# du -hc /var/lib/mysql/mythconverg
18G /var/lib/mysql/mythconverg
18G total


And you are missing how SSDs work. Flash memory can only be written
one way, typically from a 1 to a 0 (burning down). To write from a 0
to a 1, the entire flash memory block has to be erased. Flash reads
are fast, burn downs are slower but reasonably fast, and erases take
ages. When a write happens that is unable to just burn down existing
flash memory bits to do the write, the data in that block of SSD would
need to be erased before the write could take place in that block.
That would be far too slow - erase times in old flash were measured in
seconds and are still not that fast. So instead of erasing the
existing block, an SSD simply assigns a new flash block to that
address, copies the data from the old block to RAM and then writes the
changes to the RAM. The RAM copy of the block is then written to the
new block, and the old block is queued to be erased. The erasing of
blocks goes on in the background as required, without causing any
performance problems, unless there are no erased blocks available for
a new write. The SSD operating system keeps track of which flash
blocks are assigned to which addresses, and has more blocks than are
required for the address space it presents to the user, so it has
erased spares available all the time unless there is very heavy write
traffic for a long period. It also keeps track of how often a block
has been erased and uses the least erased block on the erased block
list for the next block to be written to. This spreads out the wear
on the blocks so they tend to wear out at a similar rate. When a
block fails to erase or fails to burn down, it is placed on a "do not
use" list. Eventually, there are too many blocks on the "do not use"
list and there is no block available to be assigned to an address. At
that point, the SSD is considered failed. But all the data is still
in readable blocks and can be copied off. And if you are monitoring
the SSD with SMART, you will have had lots of warnings that failure is
approaching, so if you are sensible you will have retired the SSD
before then. So you are no worse off than with a dying hard drive,
which these days also has spare sectors it can swap in to replace
failed sectors.
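
(On an NVMe SSD, smartctl reports that wear state directly - the "Percentage Used", "Available Spare" and "Media and Data Integrity Errors" fields are the ones to watch; on SATA SSDs the equivalent attributes are vendor-specific, so check your model's documentation.)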

However, just like with hard drives, if you get a catastrophic failure
at any time, you can lose all the data on a hard drive or on an SSD.
That seems to be what has happened with your SSDs, rather than they
reached the end of their lifetimes. Many years ago, I had a whole set
of Seagate 7200.11 hard drives do the same (5 or 6 of them) - they
were just a badly designed drive.

You do need to do some calculations before you select an SSD, to see
what lifetime you can expect. To get a longer lifetime, you just buy
a bigger SSD, even if you do not need the space. My calculations for
this one said I would get 5-20 years out of it, depending on how my
database grew. So I am happy with it lasting 5.5 years so far, and I
do intend to replace it (and the motherboard and CPU) in the next year
or so. I also have some hard drives that are still going after 10
years of 24/7 use, but most tend to fail in the 5-7 year range now -
more modern drives are not built as well.
Re: Building a new MythTV Backend for 2022
> On 11 Jan 2022, at 10:21 pm, Stephen Worthington <stephen_agent@jsw.gen.nz> wrote:
<much snipped>

Well articulated and very detailed rebuttal of the urban myth (that when SSDs reach their use-by date you lose everything).
James

Re: Building a new MythTV Backend for 2022
On 22-01-11 22:57:51 CET, James wrote:
> Well articulated and very detailed rebuttal of the urban myth (that when SSDs reach their use-by date you lose everything).

German computer magazine c't tested SSD lifetime five years ago. The drives
exceeded their guaranteed endurance by a factor ranging from 2.5 (72 TBW
rated / 188 TB written) to 30+ (150 TBW rated / 4623+ TB written).
In a later test of cheap SSDs, the lifetime was similar; they were just
slower than the other SSDs. In that later article, they also reported
that the best SSD from the prior test was now showing signs of failure after
9 months of continuous writing, by then at 8000 TB.
All tested models were of 240-256 GB capacity.

Robert
Re: Building a new MythTV Backend for 2022
Hika van den Hoven <hikavdh@gmail.com> wrote:

>> Or it could fail next week - but it’s OK, they’ll send you a nice new but blank one to replace it.
>> Kingston have just sent me a new 240G drive to replace one that
>> failed. In this case I could probably recover the data because it
>> works for a while and then “just disappears” off the bus - not that
>> I need to as it was half of a mirrored pair.

> Very probably this is a SATA controller or cable issue. I have
> encountered those and since I replaced the controller it's as stable
> as a rock.

No, definitely the drive. I tried the usual swap-the-drives-around business and the fault moved with the drive; it didn't stay with the port or cable.



Mark Wedel <mwedel@sonic.net> wrote:

> What does not quite seem to work fine is the updating of grub - maybe it has changed, but it used to be the case that it would only update the MBR of one of the drives, and I would have to run the command explicitly to update the MBR on the second drive in the mirror.

That is correct. You need to run "grub-install /dev/sdX" for each drive in the mirror. I once had a machine with GRUB installed on 5 drives - just because there were 5 drives in a RAID5 for the main storage, and to keep things simple I just mirrored /boot across all 5 of them.
As you say, it's "annoying" if you've forgotten this and the one drive with GRUB on it fails. It's annoying, but less so, if one of your boot drives fails and the machine then tries to boot from one of your data drives due to the "boot order" setting in the BIOS.
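
A minimal sketch of that for a two-drive mirror (device names are examples only):

grub-install /dev/sda
grub-install /dev/sdb

On Debian and derivatives, "dpkg-reconfigure grub-pc" also lets you select several install devices, so that package upgrades re-run the GRUB install on all of them.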



On the subject of drive speed and database size, lots of RAM helps here. If you have enough RAM to keep the database (most of the time) in RAM then speed is massively improved - you need to do some tweaking of DB engine settings to maximise this. A few weeks ago I was talking with someone who, for work, was working with a machine with multi-TB of RAM for that very purpose !
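
For MythTV's MariaDB/MySQL database, the main knob for that is the InnoDB buffer pool. A minimal sketch, assuming a backend with RAM to spare - the 4G value is only an example, size it to your database and leave room for the OS page cache:

[mysqld]
innodb_buffer_pool_size = 4G

Drop that into a config file (e.g. under /etc/mysql/mariadb.conf.d/ on Debian-style systems) and restart the database server.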
<rambling OT anecdote mode>And a good few years ago now I ran a system with SCO OpenServer (back when they did good software and didn’t sue their customers). That was limited to 460,000k of cache (funny how things like that get burned into memory !) which was statically configured - and yes it was a hard limit, I once tried setting it larger and it wouldn’t boot. That was fine “most” of the time - until a report was run that (thanks to inefficient tooling) did a full (non-indexed) join & select with a GB table - that basically stopped the machine with 99 to 100% WIO and we’d know it was happening as the alarms (otherwise known as telephones) started ringing. I later re-wrote that report in Informix, taking care to use indexes, and got the runtime down from 40 hours to 90 seconds - and it could be run during working hours.

Related to that, if the database is idle(ish), then over time its cache will get bumped out in favour of recordings. Is there a simple way to limit that?


Simon
