Mailing List Archive

scanning petabyte-size filesystem
Hello,

I'm wondering if clamdscan can scan petabyte-size storage.

There are some MAXxxx settings in /etc/clamd.d/scan.conf; is there
any limit on filesystem size when running clamdscan?
Or does it just take an awful lot of time, with no programmatic
limitations?

Regards, Kazunori

_______________________________________________

clamav-users mailing list
clamav-users@lists.clamav.net
https://lists.clamav.net/mailman/listinfo/clamav-users


Help us build a comprehensive ClamAV guide:
https://github.com/vrtadmin/clamav-faq

http://www.clamav.net/contact.html#ml
Re: scanning petabyte-size filesystem
Hi there,

On Thu, 18 Jun 2020, Kazunori Ohki wrote:

> I'm wondering if clamdscan can scan petabyte-size storage.

The filesystem limits are up to your operating system. What is it?
It's also up to the filesystem utilities etc. to make the data
available so that ClamAV can scan it in some way. There are several
ways to do that, and it's all documented.

ClamAV itself doesn't really care about the size of the filesystem,
although you might care if you hand petabytes of files to ClamAV to
scan - it will take a while, and probably use a lot of energy. The
more random data you give ClamAV to scan, the greater the risk of a
false positive.
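In practice a petabyte tree is usually fed to ClamAV in bounded batches rather than in one enormous invocation. A minimal sketch of that idea (the batch-size budget and the way clamdscan is driven here are my assumptions, not anything ClamAV requires; `--fdpass` and `--multiscan` are real clamdscan options):

```python
import os
import subprocess

def batches(root, max_bytes=10 * 2**30):
    """Walk `root` and yield lists of file paths whose combined size
    stays under `max_bytes`, so each clamdscan call is bounded."""
    batch, total = [], 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # vanished or unreadable entry; skip it
            if batch and total + size > max_bytes:
                yield batch
                batch, total = [], 0
            batch.append(path)
            total += size
    if batch:
        yield batch

def scan(root):
    # --fdpass passes open file descriptors to clamd (helps with
    # permissions); --multiscan scans a batch with multiple threads.
    for batch in batches(root):
        subprocess.run(["clamdscan", "--fdpass", "--multiscan", *batch])
```

Batching also makes it easy to checkpoint and resume a scan that will run for days.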

> There are some MAXxxx settings in /etc/clamd.d/scan.conf; is there
> any limit on filesystem size when running clamdscan?

I don't know any file called 'scan.conf' but I've seen people refer to
such a file in the past. I guess it's in some distribution's packages.

You can find the man page for 'clamd.conf' on the ClamAV Website.
Please read it, noting especially the warnings that it contains about
changing some of the limits, and any part where it suggests that it is
not a good idea to delete files just because ClamAV flags them.
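For reference, the limits in question look roughly like this in clamd.conf (the option names are real; the values below are purely illustrative, not recommendations - check the defaults and warnings in the man page before raising any of them):

```
# Illustrative values only -- see the clamd.conf man page first.
MaxScanSize 400M    # max data scanned per file, archives included
MaxFileSize 100M    # files larger than this are skipped
MaxRecursion 17     # nested-archive recursion depth
MaxFiles 10000      # max files scanned within an archive
```

None of these limit the size of the filesystem itself; they bound what clamd will spend on any single object.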

--

73,
Ged.

Re: scanning petabyte-size filesystem
At least for the Windows build of ClamAV there are 2 GB file-size limitations (and bugs) due to the use of old 32-bit library functions:
https://bugzilla.clamav.net/show_bug.cgi?id=12251

I have not encountered the same problem with TB volumes though.
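One way to cope with a limit like that is to pre-filter oversized files and handle them separately (or exclude them). A small sketch - the 2 GiB threshold mirrors the bug report above, but the helper itself is mine, not part of ClamAV:

```python
import os

TWO_GIB = 2 * 1024**3  # files at or above this hit the 32-bit limit

def oversized(root, threshold=TWO_GIB):
    """Yield paths under `root` whose size is >= `threshold`, so they
    can be excluded from the clamdscan run or handled separately."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) >= threshold:
                    yield path
            except OSError:
                pass  # unreadable entry; skip it
```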

-----Original Message-----
From: Kazunori Ohki <k-ohki@nabe-intl.co.jp>
Sent: Thursday, June 18, 2020 6:06 AM
To: clamav-users@lists.clamav.net
Subject: [clamav-users] scanning petabyte-size filesystem

Re: scanning petabyte-size filesystem
Thanks Ged,

> The filesystem limits are up to your operating system. What is it?

The operating systems are CentOS 7 and 8, and the filesystem is ZFS.

> The more random data you give ClamAV to scan,
> the greater the risk of a false positive.

Thanks, I'll take this into consideration.

> You can find the man page for 'clamd.conf' on the ClamAV Website.
> Please read it, noting especially the warnings that it contains about
> changing some of the limits, and any part where it suggests that it is
> not a good idea to delete files just because ClamAV flags them.

OK, understood.

Kazunori
Re: scanning petabyte-size filesystem
Thanks Andy,

> I have not encountered the same problem with TB volumes though.

Thanks for sharing this information.

Kazunori