Mailing List Archive

Re: Re: Finally got a SSD drive to put my OS on
On Wed, Apr 19, 2023 at 01:00:33PM -0700, Mark Knecht wrote:


> I think technically they default to the physical block size internally
> and the earlier ones, attempting to be more compatible with HDDs,
> had 4K blocks. Some of the newer chips now have 16K blocks but
> still support 512B Logical Block Addressing.
>
> All of these devices are essentially small computers. They have internal
> controllers, DRAM caches usually in the 1-2GB sort of range but getting
> larger.

Actually, cheap(er) SSDs don’t have their own DRAM, but rely on the host for
this. There is an ongoing debate in tech forums over whether that is a bad
thing or not. A RAM cache can help optimise writes by caching many small
writes and aggregating them into larger blocks.

> The bus speeds they quote are because data is moving for the most
> part in and out of cache in the drive.

Are you talking about the pseudo SLC cache? Because AFAIK the DRAM cache has
no influence on read performance.

> What I'm not sure about is how inodes factor into this.
>
> For instance:
>
> mark@science2:~$ ls -i
> 35790149 000_NOT_BACKED_UP
> 33320794 All_Files.txt
> 33337840 All_Sizes_2.txt
> 33337952 All_Sizes.txt
> 33329818 All_Sorted.txt
> 33306743 ardour_deps_install.sh
> 33309917 ardour_deps_remove.sh
> 33557560 Arena_Chess
> 33423859 Astro_Data
> 33560973 Astronomy
> 33423886 Astro_science
> 33307443 'Backup codes - Login.gov.pdf'
> 33329080 basic-install.sh
> 33558634 bin
> 33561132 biosim4_functions.txt
> 33316157 Boot_Config.txt
> 33560975 Builder
> 33338822 CFL_88_F_Bright_Syn.xsc
>
> If the inodes are on the disk then how are they
> stored? Does a single inode occupy a physical
> block? A 512 byte LBA? Something else?

man mkfs.ext4 says:
[…] the default inode size is 256 bytes for most file systems, except for
small file systems where the inode size will be 128 bytes. […]

And if a file is small enough, it can actually fit inside the inode itself,
saving the expense of another FS sector.


When formatting file systems, I usually lower the number of inodes from the
default value to gain storage space. The default is one inode per 16 kB of
FS size, which gives you 60 million inodes per TB. In practice, even one
million per TB would be overkill in a use case like Dale’s media storage.¹
Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not
counting extra control metadata and ext4 redundancies.
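That back-of-the-envelope figure can be checked in shell (a sketch; 1 TB
taken as 10^12 bytes, values as above):

```shell
# Inode overhead per TB at ext4's default bytes-per-inode ratio of 16 KiB,
# compared with keeping only 1 million inodes per TB.
default_inodes=$(( 1000000000000 / 16384 ))         # about 61 million per TB
spare_inodes=$(( default_inodes - 1000000 ))        # inodes we could do without
overhead_gb=$(( spare_inodes * 256 / 1000000000 ))  # 256 bytes apiece
echo "$overhead_gb GB of inode tables saved per TB"
```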

The defaults are set in /etc/mke2fs.conf. It also contains some alternative
values of bytes-per-inode for certain usage types. The type largefile
allocates one inode per 1 MB, giving you 1 million inodes per TB of space.
Since ext4 is much more efficient with inodes than ext3, it is even content
with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
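As a sketch of what that ratio means on a big drive like the 16 TB ones
mentioned below (the mkfs.ext4 line is illustrative and /dev/sdX1 is a
placeholder device, so don't paste it blindly):

```shell
# How many inodes -T largefile4 (4 MiB bytes-per-inode in the stock
# /etc/mke2fs.conf) would allocate on a 16 TB drive.
# The actual format command would be something like:
#   mkfs.ext4 -T largefile4 /dev/sdX1
size_bytes=16000000000000
inodes=$(( size_bytes / 4194304 ))
echo "$inodes inodes"
```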

For root partitions, I tend to allocate 1 million inodes, maybe some more
for a full Gentoo-based desktop due to the portage tree’s sheer number of
small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses
500 k right now.


¹ Assuming one inode equals one directory or unfragmented file on ext4.
I’m not sure what the allocation size limit for one inode is, but it is
*very* large. Ext3 had a rather low limit, which is why it was so slow with
big files. But that was one of the big improvements in ext4’s extended
inodes, at the cost of double inode size to house the required metadata.

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

FINE: Tax for doing wrong. TAX: Fine for doing fine.
Re: Re: Finally got a SSD drive to put my OS on
Frank Steinmetzger wrote:
> <<<SNIP>>>
>
> When formatting file systems, I usually lower the number of inodes from the
> default value to gain storage space. The default is one inode per 16 kB of
> FS size, which gives you 60 million inodes per TB. In practice, even one
> million per TB would be overkill in a use case like Dale’s media storage.¹
> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not
> counting extra control metadata and ext4 redundancies.
>
> The defaults are set in /etc/mke2fs.conf. It also contains some alternative
> values of bytes-per-inode for certain usage types. The type largefile
> allocates one inode per 1 MB, giving you 1 million inodes per TB of space.
> Since ext4 is much more efficient with inodes than ext3, it is even content
> with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
>
> For root partitions, I tend to allocate 1 million inodes, maybe some more
> for a full Gentoo-based desktop due to the portage tree’s sheer number of
> small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses
> 500 k right now.
>
>
> ¹ Assuming one inode equals one directory or unfragmented file on ext4.
> I’m not sure what the allocation size limit for one inode is, but it is
> *very* large. Ext3 had a rather low limit, which is why it was so slow with
> big files. But that was one of the big improvements in ext4’s extended
> inodes, at the cost of double inode size to house the required metadata.
>


This is interesting.  I have been buying 16TB drives recently.  After
all, with this fiber connection and me using torrents, I can fill up a
drive pretty fast, but I am slowing down as I'm no longer needing to
find more stuff to download.  Even 10GB per TB can add up.  For a 16TB
drive, that's 160GBs at least.  That's quite a few videos.  I didn't
realize it added up that fast.  Percentage wise it isn't a lot but given
the size of the drives, it does add up quick.  If I ever rearrange my
drives again and can change the file system, I may reduce the inodes at
least on the ones I only have large files on.  Still tho, given I use
LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
assume it increases the inodes as well.  If so, then reducing inodes
should be OK.  If not, I may increase drives until it has so many large
files it still runs out of inodes.  I suspect it adds inodes when I
expand the file system tho and I can adjust without worrying about it. 
I just have to set it when I first create the file system I guess.

This is my current drive setup. 


root@fireball / # pvs -O vg_name
  PV         VG     Fmt  Attr PSize    PFree
  /dev/sda7  OS     lvm2 a--  <124.46g 21.39g
  /dev/sdf1  backup lvm2 a--   698.63g     0
  /dev/sde1  crypt  lvm2 a--    14.55t     0
  /dev/sdb1  crypt  lvm2 a--    14.55t     0
  /dev/sdh1  datavg lvm2 a--    12.73t     0
  /dev/sdc1  datavg lvm2 a--    <9.10t     0
  /dev/sdi1  home   lvm2 a--    <7.28t     0
root@fireball / #


The one marked crypt is the one that is mostly large video files.  The
one marked datavg is where I store torrents.  Let's not delve too deep
into that tho.  ;-)  As you can see, crypt has two 16TB drives now and
I'm about 90% full.  I plan to expand next month if possible.  It'll be
another 16TB drive when I do.  So, that will be three 16TB drives. 
About 43TBs.  Little math, 430GB of space for inodes.  That added up
quick. 

I wonder.  Is there a way to find out the smallest size file in a
directory or sub directory, largest files, then maybe an average file
size???  I thought about du but given the number of files I have here,
it would be a really HUGE list of files.  Could take hours or more too. 
This is what KDE properties shows.

26.1 TiB (28,700,020,905,777)

55,619 files, 1,145 sub-folders

Little math. Average file size is 460MBs. So, I wonder what all could be
changed and not risk anything??? I wonder if that is accurate enough???
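That division can be checked with a quick shell sketch, using the two KDE
figures above:

```shell
# Average file size from the KDE properties numbers quoted above.
total=28700020905777   # bytes
files=55619
avg_mib=$(( total / files / 1024 / 1024 ))
echo "$avg_mib MiB average"   # works out to roughly 492 MiB per file
```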

Interesting info.

Dale

:-) :-)
Re: Re: Finally got a SSD drive to put my OS on
> I wonder. Is there a way to find out the smallest size file in a
directory or sub directory, largest files, then maybe a average file
size??? I thought about du but given the number of files I have here, it
would be a really HUGE list of files. Could take hours or more too. This
is what KDE properties shows.

I'm sure there are more accurate ways but

sudo ls -R / | wc

gives you the number of lines returned from the ls command. It's not perfect
as there are blank lines in the ls but it's a start.

My desktop machine has about 2.2M files.

Again, there are going to be folks who can tell you how to remove blank
lines and other cruft but it's a start.

Only takes a minute to run on my Ryzen 9 5950X. YMMV.
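One way to strip that cruft is to filter out the blank lines and the
"directory:" header lines that ls -R emits. A sketch, demoed on a throwaway
directory so it is safe to paste:

```shell
# ls -R prints "path:" headers and blank lines between directories;
# grep -v drops both so the count covers only actual entries.
# On a real system:  sudo ls -R / | grep -v -e '^$' -e ':$' | wc -l
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/a" "$dir/sub/b"
count=$(ls -R "$dir" | grep -vc -e '^$' -e ':$')
echo "$count entries"   # counts a, sub and b
rm -r "$dir"
```

Note that this still counts directories as entries, and would miscount any
file name that itself ends in a colon.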
Re: Re: Finally got a SSD drive to put my OS on
Mark Knecht wrote:
>
> > I wonder.  Is there a way to find out the smallest size file in a
> directory or sub directory, largest files, then maybe a average file
> size???  I thought about du but given the number of files I have here,
> it would be a really HUGE list of files.  Could take hours or more
> too.  This is what KDE properties shows.
>
> I'm sure there are more accurate ways but 
>
> sudo ls -R / | wc
>
> give you the number of lines returned from the ls command. It's not
> perfect as there are blank lines in the ls but it's a start.
>
> My desktop machine has about 2.2M files.
>
> Again, there are going to be folks who can tell you how to remove
> blank lines and other cruft but it's a start.
>
> Only takes a minute to run on my Ryzen 9 5950X. YMMV.
>

I did a right click on the directory in Dolphin and selected
properties.  It told me there is a little over 55,000 files.  Some 1,100
directories, not sure if directories use inodes or not.  Basically,
there is a little over 56,000 somethings on that file system.  I was
curious what the smallest file is and the largest.  No idea how to find
that really.  Even du separates by directory not individual files
regardless of directory.  At least the way I use it anyway. 

If I ever have to move things around again, I'll likely start a thread
just for figuring out the setting for inodes.  I'll likely know more
about the number of files too. 

Dale

:-)  :-) 
Re: Re: Finally got a SSD drive to put my OS on
On 4/19/23 21:23, Dale wrote:
> Mark Knecht wrote:
>>
>> > I wonder.  Is there a way to find out the smallest size file in a
>> directory or sub directory, largest files, then maybe a average file
>> size???  I thought about du but given the number of files I have here,
>> it would be a really HUGE list of files. Could take hours or more
>> too.  This is what KDE properties shows.
>>
>> I'm sure there are more accurate ways but
>>
>> sudo ls -R / | wc
>>
>> give you the number of lines returned from the ls command. It's not
>> perfect as there are blank lines in the ls but it's a start.
>>
>> My desktop machine has about 2.2M files.
>>
>> Again, there are going to be folks who can tell you how to remove
>> blank lines and other cruft but it's a start.
>>
>> Only takes a minute to run on my Ryzen 9 5950X. YMMV.
>>
>
> I did a right click on the directory in Dolphin and selected
> properties.  It told me there is a little over 55,000 files.  Some 1,100
> directories, not sure if directories use inodes or not. Basically, there
> is a little over 56,000 somethings on that file system.  I was curious
> what the smallest file is and the largest. No idea how to find that
> really.  Even du separates by directory not individual files regardless
> of directory.  At least the way I use it anyway.
>
> If I ever have to move things around again, I'll likely start a thread
> just for figuring out the setting for inodes.  I'll likely know more
> about the number of files too.
>
> Dale
>
> :-)  :-)

If you do not mind using graphical solutions, Filelight can help you
easily visualize where your largest directories and files are residing.

https://packages.gentoo.org/packages/kde-apps/filelight

> Visualise disk usage with interactive map of concentric, segmented rings

Eric
Re: Re: Finally got a SSD drive to put my OS on
On Wed, Apr 19, 2023 at 06:32:45PM -0500, Dale wrote:
> Frank Steinmetzger wrote:
> > <<<SNIP>>>
> >
> > When formatting file systems, I usually lower the number of inodes from the
> > default value to gain storage space. The default is one inode per 16 kB of
> > FS size, which gives you 60 million inodes per TB. In practice, even one
> > million per TB would be overkill in a use case like Dale’s media storage.¹
> > Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not
> > counting extra control metadata and ext4 redundancies.
>
> If I ever rearrange my
> drives again and can change the file system, I may reduce the inodes at
> least on the ones I only have large files on.  Still tho, given I use
> LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
> assume it increases the inodes as well.

I remember from yesterday that the manpage says that inodes are added
according to the bytes-per-inode value.

> I wonder.  Is there a way to find out the smallest size file in a
> directory or sub directory, largest files, then maybe a average file
> size???

The 20 smallest:
`find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`

The 20 largest: either use tail instead of head or reverse sorting with -r.
You can also first pipe the output of stat into a file so you can sort and
analyse the list more efficiently, including calculating averages.
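The average can also be calculated in the same pipe, e.g. with awk. A sketch,
demoed on a temporary directory so it is self-contained; point find at the
directory of interest instead:

```shell
# Count and average size of all regular files below a directory, using
# the same find | stat pattern as above.
dir=$(mktemp -d)
printf 'aaaa' > "$dir/a"
printf 'bb'   > "$dir/b"
summary=$(find "$dir" -type f -print0 \
  | xargs -0 stat -c '%s' \
  | awk '{ n++; sum += $1 } END { printf "%d files, avg %.1f bytes", n, sum/n }')
echo "$summary"
rm -r "$dir"
```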

> I thought about du but given the number of files I have here,
> it would be a really HUGE list of files.  Could take hours or more too. 

I use a “cache” of text files with file listings of all my external drives.
This allows me to glance over my entire data storage without having to plug
in any drive. It uses tree underneath to get the list:

`tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`

This gives me a list of all directories and files, with their full path,
date and size information and accumulated directory size in a concise
format. Add -pug to also include permissions.

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Computers are the most congenial product of human laziness to-date.
Re: Re: Finally got a SSD drive to put my OS on
On Wed, Apr 19, 2023 at 06:09:15PM -0700, Mark Knecht wrote:
> > I wonder. Is there a way to find out the smallest size file in a
> directory or sub directory, largest files, then maybe a average file
> size??? I thought about du but given the number of files I have here, it
> would be a really HUGE list of files. Could take hours or more too. This
> is what KDE properties shows.
>
> I'm sure there are more accurate ways but
>
> sudo ls -R / | wc

Number of directories (not accounting for symlinks):
find -type d | wc -l

Number of files (not accounting for symlinks):
find -type f | wc -l

> give you the number of lines returned from the ls command. It's not perfect
> as there are blank lines in the ls but it's a start.
>
> My desktop machine has about 2.2M files.
>
> Again, there are going to be folks who can tell you how to remove blank
> lines and other cruft but it's a start.

Or not produce them in the first place. ;-)

> Only takes a minute to run on my Ryzen 9 5950X. YMMV.

It’s not a question of the processor, but of the storage device. And of your
cache, because the second run will probably not touch the device at all.

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Bosses are like timpani: the more hollow they are, the louder they sound.
Re: Re: Finally got a SSD drive to put my OS on
Frank Steinmetzger wrote:
> On Wed, Apr 19, 2023 at 06:32:45PM -0500, Dale wrote:
>> Frank Steinmetzger wrote:
>>> <<<SNIP>>>
>>>
>>> When formatting file systems, I usually lower the number of inodes from the
>>> default value to gain storage space. The default is one inode per 16 kB of
>>> FS size, which gives you 60 million inodes per TB. In practice, even one
>>> million per TB would be overkill in a use case like Dale’s media storage.¹
>>> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not
>>> counting extra control metadata and ext4 redundancies.
>> If I ever rearrange my
>> drives again and can change the file system, I may reduce the inodes at
>> least on the ones I only have large files on.  Still tho, given I use
>> LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
>> assume it increases the inodes as well.
> I remember from yesterday that the manpage says that inodes are added
> according to the bytes-per-inode value.
>
>> I wonder.  Is there a way to find out the smallest size file in a
>> directory or sub directory, largest files, then maybe a average file
>> size???
> The 20 smallest:
> `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
>
> The 20 largest: either use tail instead of head or reverse sorting with -r.
> You can also first pipe the output of stat into a file so you can sort and
> analyse the list more efficiently, including calculating averages.

When I first ran this while in / itself, it occurred to me that it
doesn't specify what directory.  I thought maybe changing to the
directory I want it to look at would work but get this: 


root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
-0 stat -c '%s %n' | sort -n | head -n 20`
-bash: 2: command not found
root@fireball /home/dale/Desktop/Crypt #


It works if I'm in the / directory but not when I'm cd'd to the
directory I want to know about.  I don't see a spot to change it.  Ideas.

>> I thought about du but given the number of files I have here,
>> it would be a really HUGE list of files.  Could take hours or more too. 
> I use a “cache” of text files with file listings of all my external drives.
> This allows me to glance over my entire data storage without having to plug
> in any drive. It uses tree underneath to get the list:
>
> `tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`
>
> This gives me a list of all directories and files, with their full path,
> date and size information and accumulated directory size in a concise
> format. Add -pug to also include permissions.
>

Save this for later use.  ;-)

Dale

:-)  :-) 
Re: Re: Finally got a SSD drive to put my OS on
On Wednesday, 19 April 2023 18:59:26 BST Dale wrote:
> Peter Humphrey wrote:
> > On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
> >> With my HDD:
> >> # smartctl -x /dev/sda | grep -i 'sector size'
> >> Sector Sizes: 512 bytes logical, 4096 bytes physical
> >
> > Or, with an NVMe drive:
> >
> > # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> > Supported LBA Sizes (NSID 0x1)
> > Id Fmt Data Metadt Rel_Perf
> >
> > 0 + 512 0 0
> >
> > :)
>
> When I run that command, sdd is my SSD drive, ironic I know. Anyway, it
> doesn't show block sizes. It returns nothing.

I did say it was for an NVMe drive, Dale. If your drive was one of those, the
kernel would have named it /dev/nvme0n1 or similar.

--
Regards,
Peter.
Re: Re: Finally got a SSD drive to put my OS on
eric wrote:
> On 4/19/23 21:23, Dale wrote:
>> Mark Knecht wrote:
>>>
>>> > I wonder.  Is there a way to find out the smallest size file in a
>>> directory or sub directory, largest files, then maybe a average file
>>> size???  I thought about du but given the number of files I have
>>> here, it would be a really HUGE list of files. Could take hours or
>>> more too.  This is what KDE properties shows.
>>>
>>> I'm sure there are more accurate ways but
>>>
>>> sudo ls -R / | wc
>>>
>>> give you the number of lines returned from the ls command. It's not
>>> perfect as there are blank lines in the ls but it's a start.
>>>
>>> My desktop machine has about 2.2M files.
>>>
>>> Again, there are going to be folks who can tell you how to remove
>>> blank lines and other cruft but it's a start.
>>>
>>> Only takes a minute to run on my Ryzen 9 5950X. YMMV.
>>>
>>
>> I did a right click on the directory in Dolphin and selected
>> properties.  It told me there is a little over 55,000 files.  Some
>> 1,100 directories, not sure if directories use inodes or not.
>> Basically, there is a little over 56,000 somethings on that file
>> system.  I was curious what the smallest file is and the largest. No
>> idea how to find that really.  Even du separates by directory not
>> individual files regardless of directory.  At least the way I use it
>> anyway.
>>
>> If I ever have to move things around again, I'll likely start a
>> thread just for figuring out the setting for inodes.  I'll likely
>> know more about the number of files too.
>>
>> Dale
>>
>> :-)  :-)
>
> If you do not mind using graphical solutions, Filelight can help you
> easily visualize where your largest directories and files are residing.
>
> https://packages.gentoo.org/packages/kde-apps/filelight
>
>> Visualise disk usage with interactive map of concentric, segmented rings
>
> Eric
>

There used to be a KDE app that worked a bit like this.  I liked it but
I think it died.  I haven't seen it in ages, not long after the switch
from KDE3 to KDE4 I think.  Given the volume of files and the size of
the data, I wish I could zoom in sometimes.  Those little ones disappear. 

Thanks for that info. Nifty. 

Dale

:-)  :-) 
Re: Re: Finally got a SSD drive to put my OS on
Peter Humphrey wrote:
> On Wednesday, 19 April 2023 18:59:26 BST Dale wrote:
>> Peter Humphrey wrote:
>>> On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
>>>> With my HDD:
>>>> # smartctl -x /dev/sda | grep -i 'sector size'
>>>> Sector Sizes: 512 bytes logical, 4096 bytes physical
>>> Or, with an NVMe drive:
>>>
>>> # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
>>> Supported LBA Sizes (NSID 0x1)
>>> Id Fmt Data Metadt Rel_Perf
>>>
>>> 0 + 512 0 0
>>>
>>> :)
>> When I run that command, sdd is my SSD drive, ironic I know. Anyway, it
>> doesn't show block sizes. It returns nothing.
> I did say it was for an NVMe drive, Dale. If your drive was one of those, the
> kernel would have named it /dev/nvme0n1 or similar.
>

Well, I was hoping it would work on all SSD-type drives.  ;-) 

Dale

:-)  :-)
Re: Re: Finally got a SSD drive to put my OS on
On Thursday, 20 April 2023 10:29:59 BST Dale wrote:
> Frank Steinmetzger wrote:
> > On Wed, Apr 19, 2023 at 06:32:45PM -0500, Dale wrote:
> >> Frank Steinmetzger wrote:
> >>> <<<SNIP>>>
> >>>
> >>> When formatting file systems, I usually lower the number of inodes from
> >>> the
> >>> default value to gain storage space. The default is one inode per 16 kB
> >>> of
> >>> FS size, which gives you 60 million inodes per TB. In practice, even one
> >>> million per TB would be overkill in a use case like Dale’s media
> >>> storage.¹
> >>> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB,
> >>> not counting extra control metadata and ext4 redundancies.
> >>
> >> If I ever rearrange my
> >> drives again and can change the file system, I may reduce the inodes at
> >> least on the ones I only have large files on. Still tho, given I use
> >> LVM and all, maybe that isn't a great idea. As I add drives with LVM, I
> >> assume it increases the inodes as well.
> >
> > I remember from yesterday that the manpage says that inodes are added
> > according to the bytes-per-inode value.
> >
> >> I wonder. Is there a way to find out the smallest size file in a
> >> directory or sub directory, largest files, then maybe a average file
> >> size???
> >
> > The 20 smallest:
> > `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
> >
> > The 20 largest: either use tail instead of head or reverse sorting with
> > -r.
> > You can also first pipe the output of stat into a file so you can sort and
> > analyse the list more efficiently, including calculating averages.
>
> When I first run this while in / itself, it occurred to me that it
> doesn't specify what directory. I thought maybe changing to the
> directory I want it to look at would work but get this:
>
>
> root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
> -0 stat -c '%s %n' | sort -n | head -n 20`
> -bash: 2: command not found
> root@fireball /home/dale/Desktop/Crypt #
>
>
> It works if I'm in the / directory but not when I'm cd'd to the
> directory I want to know about. I don't see a spot to change it. Ideas.

In place of "find -type..." say "find / -type..."

--
Regards,
Peter.
Re: Re: Finally got a SSD drive to put my OS on
Peter Humphrey wrote:
> On Thursday, 20 April 2023 10:29:59 BST Dale wrote:
>> Frank Steinmetzger wrote:
>>> On Wed, Apr 19, 2023 at 06:32:45PM -0500, Dale wrote:
>>>> Frank Steinmetzger wrote:
>>>>> <<<SNIP>>>
>>>>>
>>>>> When formatting file systems, I usually lower the number of inodes from
>>>>> the
>>>>> default value to gain storage space. The default is one inode per 16 kB
>>>>> of
>>>>> FS size, which gives you 60 million inodes per TB. In practice, even one
>>>>> million per TB would be overkill in a use case like Dale’s media
>>>>> storage.¹
>>>>> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB,
>>>>> not counting extra control metadata and ext4 redundancies.
>>>> If I ever rearrange my
>>>> drives again and can change the file system, I may reduce the inodes at
>>>> least on the ones I only have large files on. Still tho, given I use
>>>> LVM and all, maybe that isn't a great idea. As I add drives with LVM, I
>>>> assume it increases the inodes as well.
>>> I remember from yesterday that the manpage says that inodes are added
>>> according to the bytes-per-inode value.
>>>
>>>> I wonder. Is there a way to find out the smallest size file in a
>>>> directory or sub directory, largest files, then maybe a average file
>>>> size???
>>> The 20 smallest:
>>> `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
>>>
>>> The 20 largest: either use tail instead of head or reverse sorting with
>>> -r.
>>> You can also first pipe the output of stat into a file so you can sort and
>>> analyse the list more efficiently, including calculating averages.
>> When I first run this while in / itself, it occurred to me that it
>> doesn't specify what directory. I thought maybe changing to the
>> directory I want it to look at would work but get this:
>>
>>
>> root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
>> -0 stat -c '%s %n' | sort -n | head -n 20`
>> -bash: 2: command not found
>> root@fireball /home/dale/Desktop/Crypt #
>>
>>
>> It works if I'm in the / directory but not when I'm cd'd to the
>> directory I want to know about. I don't see a spot to change it. Ideas.
> In place of "find -type..." say "find / -type..."
>


Ahhh, that worked.  I also realized I need to leave off the ' at the
beginning and end.  I thought I left those out.  I copy and paste a
lot.  lol 

It only took a couple dozen files to start getting up to some size. 
Most of the few small files are text files with little notes about a
video.  For example, if building something I will create a text file
that lists what is needed to build what is in the video.  Other than a
few of those, file size reaches a few 100MBs pretty quick.  So, the
number of small files is pretty small.  That is good to know. 

Thanks for the command.  I never was good with xargs, sed and such.  It
took me a while to get used to grep.  ROFL 

Dale

:-)  :-) 
Re: Re: Finally got a SSD drive to put my OS on
On Thu, Apr 20, 2023 at 04:29:59AM -0500, Dale wrote:

> >> I wonder.  Is there a way to find out the smallest size file in a
> >> directory or sub directory, largest files, then maybe a average file
> >> size???
> > The 20 smallest:
> > `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
> >
> > The 20 largest: either use tail instead of head or reverse sorting with -r.
> > You can also first pipe the output of stat into a file so you can sort and
> > analyse the list more efficiently, including calculating averages.
>
> When I first run this while in / itself, it occurred to me that it
> doesn't specify what directory.  I thought maybe changing to the
> directory I want it to look at would work but get this: 

Yeah, either cd into the directory first, or pass it to find. But it’s like
tar: I can never remember in which order I need to feed stuff to find. One
relevant addition could be -xdev, to have find halt at file system
boundaries. So:

find /path/to/dir -xdev -type f ! -type l …

> root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
> -0 stat -c '%s %n' | sort -n | head -n 20`
> -bash: 2: command not found
> root@fireball /home/dale/Desktop/Crypt #

I used the `` in the mail text as a kind of hint: “everything between is a
command”. But when you paste the backticks into the terminal, the shell
performs command substitution: the command between them runs, and its output
is then executed as a new command. And since the first word of the output was
“2”, you get that error message. Sorry about the confusion.

> >> I thought about du but given the number of files I have here,
> >> it would be a really HUGE list of files.  Could take hours or more too. 
> > I use a “cache” of text files with file listings of all my external drives.
> > This allows me to glance over my entire data storage without having to plug
> > in any drive. It uses tree underneath to get the list:
> >
> > `tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`
> >
> > This gives me a list of all directories and files, with their full path,
> > date and size information and accumulated directory size in a concise
> > format. Add -pug to also include permissions.
> >
>
> Save this for later use.  ;-)

I built a wrapper script around it, to which I pass the directory I want to
read (usually the root of a removable media). The script creates a new text
file, with the current date and the directory in its name, and compresses it
at the end. This allows me to diff those files in vim and see what changed
over time. It also updates a symlink to the current version for quick access
via bash alias.
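A minimal sketch of such a wrapper might look like this. The names and
directory layout are my own invention, and find stands in for tree so the
sketch runs without extra packages; gzip and ln provide the compression and
the convenience symlink:

```shell
# snapshot DIR [LISTDIR]: write a dated listing of DIR into LISTDIR,
# compress it, and point a "latest" symlink at the newest snapshot.
snapshot() {
  target=$1
  listdir=${2:-$HOME/drive-lists}
  mkdir -p "$listdir"
  out="$listdir/$(date +%Y-%m-%d)_$(basename "$target").txt"
  # One line per entry: type, size, mtime, path; -xdev stays on one FS.
  find "$target" -xdev -printf '%y %s %TY-%Tm-%Td %p\n' | sort -k4 > "$out"
  gzip -f "$out"
  ln -sf "$(basename "$out").gz" "$listdir/latest.gz"
}

# Demo on throwaway directories:
demo=$(mktemp -d); lists=$(mktemp -d)
touch "$demo/file1"
snapshot "$demo" "$lists"
lines=$(zcat "$lists/latest.gz" | wc -l)
echo "$lines entries listed"
rm -r "$demo" "$lists"
```

Old and new snapshots can then be compared with zdiff or opened side by side
in vim for the kind of diffing described above.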

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

...llaw eht no rorrim ,rorriM
Re: Finally got a SSD drive to put my OS on
On 20/04/2023 13:59, Dale wrote:
>> In place of "find -type..." say "find / -type..."
>
> Ahhh, that worked.  I also realized I need to leave off the ' at the
> beginning and end.  I thought I left those out.  I copy and paste a
> lot.  lol

Btw, if you only want to do this for the root filesystem and exclude all
other mounted filesystems, also use the -xdev option:

find / -xdev -type ...
Re: Re: Finally got a SSD drive to put my OS on
On 20/04/2023 05:23, Dale wrote:
> Some 1,100 directories, not sure if directories use inodes or not.

"Everything is a file".

A directory is just a data file with a certain structure that maps names
to inodes.
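And directories occupy inodes of their own, so each of Dale's 1,100
sub-folders uses one. A tiny sketch:

```shell
# ls -di prints the inode number of the directory itself, rather
# than listing the inodes of its contents.
dir=$(mktemp -d)
inode=$(ls -di "$dir" | awk '{print $1}')
echo "inode $inode for $dir"
rm -r "$dir"
```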

It might still be there somewhere - I can't imagine it's been deleted,
just forgotten - but I believe some editors (emacs probably) would let
you open that file, so you could rename files by editing the line that
defined them, you could unlink a file by deleting the line, etc etc.

Obviously a very dangerous mode, but Unix was always happy about handing
out powerful footguns willy nilly.

Cheers,
Wol
Re: Finally got a SSD drive to put my OS on
Frank Steinmetzger wrote:
> On Sun, Apr 16, 2023 at 05:26:15PM -0500, Dale wrote:
>
>>>> I'm wanting to be able to boot something from the hard drive in the
>>>> event the OS itself won't boot.  The other day I had to dig around and
>>>> find a bootable USB stick and also found a DVD.  Ended up with the DVD
>>>> working best.  I already have memtest on /boot.  Thing is, I very rarely
>>>> use it. ;-)
>>> So in the scenario you are suggesting, is grub working, giving you a
>>> boot choice screen, and your new Gentoo install is not working so
>>> you want to choose Knoppix to repair whatever is wrong with 
>>> Gentoo? 
>> Given I have a 500GB drive, I got plenty of space.  Heck, a 10GB
>> partition each is more than enough for either Knoppix or LiveGUI.  I
>> could even store info on there about drive partitions and scripts that I
>> use a lot.  Jeez, that's a idea. 
> Back in the day, I was annoyed that whenever I needed $LIVE_SYSTEM, I had to
> reformat an entire USB stick for that. In times when you don’t even get
> sticks below 8 GB anymore, I found it a waste of material and useful storage
> space.
>
> And then I found ventoy: https://www.ventoy.net/
>
> It is a mini-Bootloader which you install once to a USB device, kind-of a
> live system of its own. But when booting it, it dynamically scans the
> content of its device and creates a new boot menu from it. So you can put
> many ISOs on one device as simple files, delete them, upgrade them,
> whatever, and then you can select one to boot from. Plus, the rest of the
> stick remains usable as storage, unlike sticks that were dd’ed with an ISO.
>
> --
> Grüße | Greetings | Salut | Qapla’
> Please do not share anything from, with or about me on any social network.
>
> The four elements: earth, air and firewater.

I made a USB stick with Ventoy on it but hadn't had a chance to boot
anything with it until a few days ago. I currently have the following on
ONE USB stick.


CUSTOMRESCUECD-x86_64-0.12.8.iso
KNOPPIX_V9.1DVD-2021-01-25-EN.iso
livegui-amd64-20230402T170151Z.iso
memtest86.iso
systemrescuecd-x86-5.3.2.iso


I'm having trouble with the Custom Rescue one but it could be bad since
all the others work.  CRC does try to boot but then fails.  If one were
to buy a 64GB stick, one could put a LOT of images on there.  Mine is
just a 32GB and look what I got on there. 

Anyone who boots using USB on occasion should really try this thing
out.  It is as easy as Frank says.  You install the Ventoy thing on it
and then just drop the files on there.  After that, it just works.  This
thing is awesome.  Whoever came up with this thing had a really good
slide rule. This one stick replaces five doing it the old way. 

Seriously, try this thing.  Thanks to Frank for posting about it.  I'd
never heard of it. 

Dale

:-)  :-) 

P. S. I put my old Gigabyte 770T mobo back together.  My old video card
died so I had to order one of those.  It also needed a new button
battery.  Now I'm waiting on a pack of hard drives to come in.  I
ordered a pack of five 320GB drives.  For an OS, plenty big enough.  I
also ordered a pair of USB keyboards since all I have laying around is
PS/2 types now.  On my main rig, I put in another 18TB hard drive.  I'm
replacing a 16TB on one VG and plan to use the 16TB on another VG to
replace an 8TB.  That will increase the size of both VGs.  Hopefully next
month I can order the case for the new rig.  Then I get to save up for
CPU, mobo and memory.  Just a little update for those following my puter
escapades.  LOL 
