Mailing List Archive

Error: refill: tried to read 1024 bytes, got -1
Hey, all. New question: we got a flood of errors last night,
all similar to:

refill: tried to read 1024 bytes, got -1: 5 at
/usr/lib/perl5/site_perl/5.8.5//i386-linux-thread-multi/KinoSearch/Index/FieldsReader.pm
line 54

I suspect this is NFS biting us in the butt. I've been going
over the workaround as described on:

http://www.rectangular.com/kinosearch/docs/devel/KinoSearch/Docs/NFS.html

but in our particular case, switching all the searchers is a
little easier said than done. What we /do/ have is lots of
disk, so I was thinking about:

a) keeping NFS, but don't delete the old .cfs file until
we're sure all the searchers have moved on to the new
.cfs file (a day later, perhaps), or

b) dropping NFS altogether and instead rsyncing the new .cfs
file out, then the segments file, then deleting the old
.cfs file on each searcher host.

Would either of these solutions work around the issue?
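A minimal sketch of the ordering that option (b) describes, with local copies standing in for rsync and made-up segment file names (note that, per the reply below, this ordering turns out not to be sufficient):

```python
import os
import shutil
import tempfile

def put(path, text):
    """Write a small placeholder file (stands in for real index data)."""
    with open(path, "w") as f:
        f.write(text)

master = tempfile.mkdtemp()    # where the indexer writes
searcher = tempfile.mkdtemp()  # one searcher host's local copy

# Existing generation on the searcher; new generation on the master.
put(os.path.join(searcher, "_1.cfs"), "old segment")
put(os.path.join(searcher, "segments"), "_1")
put(os.path.join(master, "_2.cfs"), "new segment")
put(os.path.join(master, "segments"), "_2")

# 1. Ship the new .cfs first, so it exists before anything points to it.
shutil.copy(os.path.join(master, "_2.cfs"), searcher)

# 2. Then ship the segments file that switches readers to the new .cfs.
#    (rsync writes to a temporary name and renames into place; a plain
#    copy, as here, lacks that atomicity.)
shutil.copy(os.path.join(master, "segments"), searcher)

# 3. Only after that, delete the superseded .cfs on the searcher.
os.remove(os.path.join(searcher, "_1.cfs"))

print(sorted(os.listdir(searcher)))
```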

____________________________________________________________
Eamon Daly
NextWave Media Group
Tel: 773 975-1115
Fax: 773 913-0970
Error: refill: tried to read 1024 bytes, got -1 [ In reply to ]
On Feb 16, 2007, at 9:15 AM, Eamon Daly wrote:

> I suspect this is NFS biting us in the butt.

#$@%! I'm going to need a XXXX.hates-software.com subdomain just so
I can let off steam about NFS.

> Would either of these solutions work around the issue?

For subtle reasons, neither of those would work. Please forgive me
for not elaborating, but I don't think either of us would be served
by my guiding you towards what would be a fragile, complex hack.

My current plan is to address this issue by implementing read
locking. But I'm not going to work on it any more until after
0.20_01 is released.

Marvin Humphrey
Rectangular Research
http://www.rectangular.com
Error: refill: tried to read 1024 bytes, got -1 [ In reply to ]
I never followed up on this: we were able to make the
problem go away by moving the store to an NFSv4 share. NFSv4
/does/ seem to honor the "delete on last close" behavior
necessary for KinoSearch pre-0.20.
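For reference, a small Python sketch of the "delete on last close" semantics in question: on a local POSIX filesystem (and, as reported above, on NFSv4), an unlinked file stays readable through any handle that was already open. The file contents here are made up for illustration:

```python
import os
import tempfile

# Create a scratch file standing in for an old .cfs segment file.
fd, path = tempfile.mkstemp()
os.write(fd, b"segment data")

# The indexer deletes the file while a "searcher" still holds it open.
os.unlink(path)

# On a filesystem honoring delete-on-last-close, the open handle still
# reads the full contents; the storage is reclaimed only when the last
# handle is closed.
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 1024)
print(data)
os.close(fd)
```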

____________________________________________________________
Eamon Daly
NextWave Media Group
Tel: 773 975-1115
Fax: 773 913-0970



Error: refill: tried to read 1024 bytes, got -1 [ In reply to ]
On May 11, 2007, at 2:11 PM, Eamon Daly wrote:

> I never followed up on this: we were able to make the
> problem go away by moving the store to an NFSv4 share. NFSv4
> /does/ seem to honor the "delete on last close" behavior
> necessary for KinoSearch pre-0.20.

Thank you for reporting back. This raises an interesting question:
should the whole elaborate locking scheme launched in 0.20_03 be
junked? Just because it took a long time to arrive at is no
justification for keeping it around.

On Linux, I believe that upgrading NFS involves upgrading the kernel,
so what we'd be talking about is requiring people to use a Linux
kernel greater than X if they want to use NFS.

Time is on our side...

Marvin Humphrey
Rectangular Research
http://www.rectangular.com/