Mailing List Archive

Re: Re: Packages up for grabs
On Sunday, 16 June 2013 21:33:53, Duncan wrote:
> Tom Wijsman posted on Sun, 16 Jun 2013 20:23:24 +0200 as excerpted:
> > On Sun, 16 Jun 2013 19:21:38 +0200 Pacho Ramos <pacho@gentoo.org> wrote:
> >> On Sun, 16 Jun 2013 at 10:09 -0700, Brian Dolbec wrote:
> >> [...]
> >>
> >> > Thank you for considering helping. I have stayed away from the
> >> > intricate details of package management in the past, but I also do
> >> > not like how long portage is taking now for dep calculations.
> >>
> >> And couldn't that effort be put into enhancing Portage instead?
> >
> > To make you see the problems and decisions, I'm going to elaborate a
> > little and would like you to ask yourself some questions.
> >
> > Is it possible to reasonably enhance the Portage code to improve dep
> > calculations in a reasonable amount of time?
>
> TL;DR: SSDs help. =:^)
>

Some more RAM too.

--

Andreas K. Huettel
Gentoo Linux developer
dilfridge@gentoo.org
http://www.akhuettel.de/
Re: Packages up for grabs
On Sun, 16 Jun 2013 20:23:24 +0200
Tom Wijsman <TomWij@gentoo.org> wrote:
> Is it possible to reasonably enhance the Portage code to improve dep
> calculations in a reasonable amount of time?

Before you start looking at speed, you should make it do full, correct
dependency enforcing. Get it right first, and fast later.

--
Ciaran McCreesh
Re: Packages up for grabs
On 2013-06-16 06:55, Brian Dolbec wrote:
> > > Due to ferringb's retirement the following packages are up for grabs:
> > > dev-python/snakeoil
> > > sys-apps/pkgcore (likely to be treecleaned, as it's no longer maintained
> > > and lacks EAPI 5 support)

> I'll take pkgcore (if somehow we can get EAPI 5 finished).

> I'll take snakeoil. I'm adding some of its libs into catalyst.

I can help with pkgcore, pkgcore-checks, and snakeoil as well. I've got
most of the EAPI 5 resolver work done in a local fork and have been
fixing other bugs I've found along the way.

Tim
Re: Re: Packages up for grabs
On Sun, 16 Jun 2013 19:33:53 +0000 (UTC)
Duncan <1i5t5.duncan@cox.net> wrote:

> TL;DR: SSDs help. =:^)

TL;DR: SSDs help, but they don't solve the underlying problem. =:-(

I have one; it's great to help make my boot short, but it isn't really
a great improvement for the Portage tree. Better I/O isn't a solution
to computational complexity; it doesn't deal with the CPU bottleneck.

Sadly, I have yet to see a hardware improvement for the CPU as big as
the switch from HDD to SSD. Maybe once we stack transistors in the
third dimension, something Intel has been working on.

> Quite apart from the theory and question of making the existing code
> faster vs. a new from-scratch implementation, there's the practical
> question of what options one can actually use to deal with the
> problem /now/.

Don't rush it: Do you know the problem well? Does the solution
properly deal with it? Is it still usable some months / years from now?

> FWIW, one solution (particularly for folks who don't claim to have
> reasonable coding skills and thus have limited options in that
> regard) is to throw hardware at the problem.

Improvements in algorithmic complexity (exponential) are much bigger
than improvements you can achieve by buying new hardware (linear).

> I recently upgraded my main system to SDD. ... SNIP ... Between that
> and the 6-core bulldozer[3] I upgraded to last year, I'm quite happy
> with portage's current performance, ... SNIP ...

Ironically, you don't even fully use that CPU, but only one core of it;
I'm glad you have a 6-core processor, but to Portage it is a single
core during dependency tree calculation.

Portage becomes slower at a faster rate than your hardware gets faster;
it will stay that way until Portage is made to benefit from that
hardware, or failing that, until an alternative PM comes along.

I didn't get my short boot from upgrading hardware alone; quite the
opposite, it was rather the result of the effort spent on it.

> ---
> [1] I'm running ntp and the initial ntp-client connection and time
> sync takes ~12 seconds a lot of the time, just over the initial 10
> seconds down, 50 to go, trigger on openrc's 1-minute timeout.

Why do you make your boot wait for NTP to sync its time?

How could hardware make this time sync go any faster?

> [2] ... SNIP ... runs ~1 hour ... SNIP ...

Sounds great, but the same thing could run in much less time. I have
worse hardware, and it doesn't take much longer than yours does; so I
don't really see the benefit new hardware brings to the table. And the
HDD to SSD change, that's really a once-in-a-lifetime jump.

> [3] Also relevant, 16 gigs RAM, PORTAGETMPDIR on tmpfs.

Sounds all cool, but think about your CPU again; saturate it...

Building the Linux kernel with `make -j32 -l8` versus `make -j8` is a
huge difference; most people follow the latter instructions without
really thinking through what actually happens to the underlying data.
The former queues up jobs for your processor, so the moment a job is
done a new job is ready and you don't need to wait on the disk.
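As a sketch of that queueing effect (the Makefile below is a throwaway
illustration, not a real build):

```shell
# Throwaway demo: generate a Makefile with 16 tiny independent targets,
# then build with more jobs than cores plus a load limit.
tmpdir=$(mktemp -d)
cd "$tmpdir"
{
  printf 'all:'
  for i in $(seq 1 16); do printf ' t%s' "$i"; done
  printf '\n'
  for i in $(seq 1 16); do printf 't%s:\n\t@echo done t%s\n' "$i" "$i"; done
} > Makefile
# -j32 keeps up to 32 jobs queued; -l8 stops spawning new ones while the
# load average is above 8, so the CPU stays busy without thrashing.
make -j32 -l8 | sort
```

The load limit is what makes the high job count safe: make always has a
ready job, but backs off when the machine is already saturated.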

Something completely different: look at the history of data mining;
today's algorithms are much, much faster than those of years ago.

Just to point out that different implementations and configurations have
much more power in cutting time than the typical hardware change does.

Though this was pretty much OT; we're talking about the dependency tree
calculation, not about emerging, which is more a result of building
(e.g. your compiler) than of anything the package manager does.

PS: A take-home thought: if the hardware designers had decided not to
do storage R&D, we wouldn't have the SSD; same story, different level.
One level higher, we have physics; maybe CERN can improve hardware?
But when will that happen? Can we rely and wait on that to happen?

--
With kind regards,

Tom Wijsman (TomWij)
Gentoo Developer

E-mail address : TomWij@gentoo.org
GPG Public Key : 6D34E57D
GPG Fingerprint : C165 AF18 AB4C 400B C3D2 ABF0 95B2 1FCD 6D34 E57D
Re: Re: Packages up for grabs
On Sun, 16 Jun 2013 23:24:27 +0200
Tom Wijsman <TomWij@gentoo.org> wrote:
> I have one; it's great to help make my boot short, but it isn't really
> a great improvement for the Portage tree. Better I/O isn't a solution
> to computational complexity; it doesn't deal with the CPU bottleneck.

If the CPU is your bottleneck, Python won't help you. Python's threads
are fine for making IO easier, but the GIL prevents them from being of
any use for CPU intensive calculations.

Having said that, the CPU isn't your bottleneck.

--
Ciaran McCreesh
Re: Packages up for grabs
On 06/16/2013 11:23 AM, Tom Wijsman wrote:
> Ignoring that call graph, you could look at what has recently been
> introduced to increase the amount of time needed to calculate the
> dependency graph; you don't have to look far.
>
> http://blogs.gentoo.org/mgorny/2013/05/27/the-pointless-art-of-subslots/
>
> While I don't want point out the contents of that blog post, the title
> is relevant; implementing features like subslots on an algorithm that
> was not written with subslots in mind introduces a lot of inefficiency.

It's actually not bad, since all of the subslot rebuilds are triggered
in a single backtracking run. Anyway, I welcome having people work on
competing package managers, trying to do all of this stuff more
efficiently. :-)

> And it's not just subslots, newer features keep getting added to the
> dependency graph calculation and it gets slower and slower over time.
> It feels like revdep-rebuild moved into the dependency calculation!

I guess the main things that make it slower than it has been
historically would be things like --autounmask, --backtrack,
--complete-graph-if-new-use and --complete-graph-if-new-ver. Note that
you can use EMERGE_DEFAULT_OPTS to disable these things if you would
prefer to live without them. You might use something like --backtrack=2
if you want it to bail out early for all but the simplest backtracking
cases. Use --ignore-built-slot-operator-deps=y if you want to disable
all rebuilds involving subslots and slot-operators.
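For reference, a make.conf fragment along those lines might look like
this (a sketch only; check `man emerge` before adopting either
trade-off):

```shell
# Hypothetical /etc/portage/make.conf fragment: trade resolver
# completeness for speed, per the options described above.
EMERGE_DEFAULT_OPTS="--backtrack=2 --ignore-built-slot-operator-deps=y"
```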
--
Thanks,
Zac
Re: Re: Packages up for grabs
On Sun, 16 Jun 2013 22:38:56 +0100
Ciaran McCreesh <ciaran.mccreesh@googlemail.com> wrote:

> On Sun, 16 Jun 2013 23:24:27 +0200
> Tom Wijsman <TomWij@gentoo.org> wrote:
> > I have one; it's great to help make my boot short, but it isn't
> > really a great improvement for the Portage tree. Better I/O isn't a
> > solution to computational complexity; it doesn't deal with the CPU
> > bottleneck.
>
> If the CPU is your bottleneck, Python won't help you. Python's threads
> are fine for making IO easier, but the GIL prevents them from being of
> any use for CPU intensive calculations.
>
> Having said that, the CPU isn't your bottleneck.

That's assuming you would go threaded, but you can also aim for lower
algorithmic complexities; the complexity makes the CPU the bottleneck.

--
With kind regards,

Tom Wijsman (TomWij)
Gentoo Developer

E-mail address : TomWij@gentoo.org
GPG Public Key : 6D34E57D
GPG Fingerprint : C165 AF18 AB4C 400B C3D2 ABF0 95B2 1FCD 6D34 E57D
Re: Packages up for grabs
On Sun, 16 Jun 2013 14:57:32 -0700
Zac Medico <zmedico@gentoo.org> wrote:
> It's actually not bad, since all of the subslot rebuilds are triggered
> in a single backtracking run. Anyway, I welcome having people work on
> competing package managers, trying to do all of this stuff more
> efficiently. :-)

I'm starting to think we're all doing this wrong by going for a naive
"single choice then backtrack" model, fully consistent or otherwise.
Perhaps we're going to have to bite the bullet and go for stronger
propagation models and one of the many better alternatives to
backtracking...
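To make the model concrete, here is a hypothetical miniature of that
naive scheme (package names, versions, and dependencies are all
invented): pick the newest version of each package, and step back
whenever a choice conflicts:

```python
# Invented toy repository: each package has candidate versions, and some
# (package, version) pairs constrain the versions their deps may take.
VERSIONS = {"A": [1, 2], "B": [1, 2], "C": [1]}
DEPS = {
    ("A", 2): {"B": {1}},   # newest A only works with the older B
    ("B", 2): {"C": {1}},
    ("B", 1): {"C": {1}},
}

def consistent(chosen):
    # A partial assignment is consistent if no chosen pair's constraint
    # is violated by another choice already made.
    for (pkg, ver), reqs in DEPS.items():
        if chosen.get(pkg) == ver:
            for dep, allowed in reqs.items():
                if dep in chosen and chosen[dep] not in allowed:
                    return False
    return True

def solve(pkgs, chosen):
    if not consistent(chosen):
        return None                      # conflict: undo the last choice
    if not pkgs:
        return chosen                    # every package has a version
    pkg, rest = pkgs[0], pkgs[1:]
    for ver in sorted(VERSIONS[pkg], reverse=True):   # newest first
        result = solve(rest, {**chosen, pkg: ver})
        if result is not None:
            return result
    return None

# B=2 is tried first, clashes with A=2's constraint, and is backtracked.
print(solve(list(VERSIONS), {}))  # → {'A': 2, 'B': 1, 'C': 1}
```

Stronger propagation would rule out B=2 the moment A=2 was chosen,
instead of discovering the clash after the fact.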

--
Ciaran McCreesh
Re: Re: Packages up for grabs
On Mon, 17 Jun 2013 00:07:57 +0200
Tom Wijsman <TomWij@gentoo.org> wrote:
> That's assuming you would go threaded, but you can also aim for lower
> algorithmic complexities; the complexity makes the CPU the bottleneck.

Dependency solving is NP-hard in theory and better than quadratic in
practice. The resolution algorithms also aren't the problem in terms of
runtime (and still won't be if we started using more sophisticated
algorithms for better decision making). The problem is simply that the
model is large and messy, and the problem being solved has all kinds
of awful corner cases that have to be considered.

(As one example, every user has somewhere between a hundred and a
thousand packages installed, each of which depends directly or
indirectly upon every other package in this collection.)

There are certainly improvements to be made, both in terms of
efficiency and correctness, but if you're looking for a big leap
forward then the most useful thing we could do is reduce or eliminate
some of the requirements that make dependency resolution such a fiddly
(not hard) problem.

--
Ciaran McCreesh
Re: Packages up for grabs
Tom Wijsman posted on Sun, 16 Jun 2013 23:24:27 +0200 as excerpted:

> On Sun, 16 Jun 2013 19:33:53 +0000 (UTC)
> Duncan <1i5t5.duncan@cox.net> wrote:
>
>> TL;DR: SSDs help. =:^)
>
> TL;DR: SSDs help, but they don't solve the underlying problem. =:-(

Well, there's the long-term fix to the underlying problem, and there
are coping strategies to help with where things are at now. I was simply
saying that an SSD helps a LOT in dealing with the inefficiencies of the
current code. See the "quite apart... practical question of ... dealing
with the problem /now/" bit quoted below.

> I have one; it's great to help make my boot short, but it isn't really a
> great improvement for the Portage tree. Better I/O isn't a solution to
> computational complexity; it doesn't deal with the CPU bottleneck.

But here, agreed with ciaranm, the cpu's not the bottleneck, at least not
from cold-cache. It doesn't even up the cpu clocking from minimum as
it's mostly filesystem access. Once the cache is warm, then yes, it ups
the CPU speed and I see the single-core behavior you mention, but
cold-cache, no way; it's I/O bound.

And with an ssd, the portage tree update (the syncs both of gentoo and
the overlays) went from a /crawling/ console scroll, to scrolling so fast
I can't read it.

>> Quite apart from the theory and question of making the existing code
>> faster vs. a new from-scratch implementation, there's the practical
>> question of what options one can actually use to deal with the problem
>> /now/.
>
> Don't rush it: Do you know the problem well? Does the solution properly
> deal with it? Is it still usable some months / years from now?

Not necessarily. But first we must /get/ to some months / years from
now, and that's a lot easier if the best is made of the current
situation, while a long term fix is being developed.

>> FWIW, one solution (particularly for folks who don't claim to have
>> reasonable coding skills and thus have limited options in that regard)
>> is to throw hardware at the problem.
>
> Improvements in algorithmic complexity (exponential) are much bigger
> than improvements you can achieve by buying new hardware (linear).

Same song different verse. Fixing the algorithmic complexity is fine and
certainly a good idea longer term, but it's not something I can use at my
next update. Throwing hardware at the problem is usable now.

>> ---
>> [1] I'm running ntp and the initial ntp-client connection and time sync
>> takes ~12 seconds a lot of the time, just over the initial 10 seconds
>> down, 50 to go, trigger on openrc's 1-minute timeout.
>
> Why do you make your boot wait for NTP to sync its time?

Well, ntpd is waiting for the initial step so it doesn't have to slew so
hard for so long if the clock's multiple seconds off.

And ntpd is in my default runlevel, with a few local service tasks that
are after * and need a good clock time anyway, so...

> How could hardware make this time sync go any faster?

Which is what I said, that as a practical matter, my boot didn't speed up
much /because/ I'm running (and waiting for) the ntp-client
time-stepper. Thus, I'd not /expect/ a hardware update (unless it's to
a more direct net connection) to help much.

>> [2] ... SNIP ... runs ~1 hour ... SNIP ...
>
> Sounds great, but the same thing could run in much less time. I have
> worse hardware, and it doesn't take much longer than yours do; so, I
> don't really see the benefits new hardware bring to the table. And that
> HDD to SSD change, that's really a once in a lifetime flood.

I expect I'm more particular than most about checking changelogs. I
certainly don't read them all, but if there's a revision-bump for
instance, I like to see what the gentoo devs considered important enough
to do a revision bump. And I religiously check portage logs, selecting
mentioned bug numbers probably about half the time, which pops up a menu
with a gentoo bug search on the number, from which I check the bug
details and sometimes the actual git commit code. For all my overlays I
check the git whatchanged logs, and I have a helper script that lets me
fetch and then check git whatchanged for a number of my live packages,
including openrc (where I switched to live-git precisely /because/ I was
following it closely enough to find the git whatchanged logs useful, both
for general information and for troubleshooting when something went wrong
-- release versions simply didn't have enough resolution, too many things
changing in each openrc release to easily track down problems and file
bugs as appropriate), as well.

And you're probably not rebuilding well over a hundred live-packages
(thank $DEITY and the devs in question for ccache!) at every update, in
addition to the usual (deep) @world version-bump and newuse updates, are
you?

Of course maybe you are, but I did specify that, and I didn't see
anything in your comments indicating anything like an apples-to-apples
comparison.

>> [3] Also relevant, 16 gigs RAM, PORTAGETMPDIR on tmpfs.
>
> Sounds all cool, but think about your CPU again; saturate it...
>
> Building the Linux kernel with `make -j32 -l8` versus `make -j8` is a
> huge difference; most people follow the latter instructions, without
> really thinking through what actually happens with the underlying data.
> The former queues up jobs for your processor; so the moment a job is
> done a new job will be ready, so, you don't need to wait on the disk.

Truth is, I used to run a plain make -j (no number and no -l at all) on
my kernel builds, just to watch the system stress and then so elegantly
recover. It's an amazing thing to watch, this Linux kernel thing and how
it deals with cpu oversaturation. =:^)

But I suppose I've gotten more conservative in my old age. =:^P
Needlessly oversaturating the CPU (and RAM) only slows things down and
forces cache dump and swappage. These days, according to my
kernel-build-script configuration, I only run -j24, which seems a
reasonable balance as
it keeps the CPUs busy but stays safely enough within a few gigs of RAM
so I don't dump-cache or hit swap. Timing a kernel build from make clean
suggests it's the same sub-seconds range from -j10 or so, up to (from
memory) -j50 or so, after which build time starts to go up, not down.

> Something completely different; look at the history of data mining,
> today's algorithms are much much faster than those of years ago.
>
> Just to point out that different implementations and configurations have
> much more power in cutting time than the typical hardware change does.

I agree and am not arguing that. All I'm saying is that there are
measures that a sysadmin can take today to at least help work around the
problem, today, while all those faster algorithms are being developed,
implemented, tested and deployed. =:^)

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Re: Re: Packages up for grabs
On Mon, 24 Jun 2013 15:27:19 +0000 (UTC)
Duncan <1i5t5.duncan@cox.net> wrote:

> > I have one; it's great to help make my boot short, but it isn't
> > really a great improvement for the Portage tree. Better I/O isn't a
> > solution to computational complexity; it doesn't deal with the CPU
> > bottleneck.
>
> But here, agreed with ciaranm, the cpu's not the bottleneck, at least
> not from cold-cache. It doesn't even up the cpu clocking from
> minimum as it's mostly filesystem access. Once the cache is warm,
> then yes, it ups the CPU speed and I see the single-core behavior you
> mention, but cold- cache, no way; it's I/O bound.
>
> And with an ssd, the portage tree update (the syncs both of gentoo
> and the overlays) went from a /crawling/ console scroll, to scrolling
> so fast I can't read it.

We're not talking about the Portage tree update, but about the
dependency tree generation, which relies much more on the CPU than on
I/O. A lot of loops inside loops inside loops, comparisons and more
data structure magic is going on; if this were optimized to a lower
complexity, or processed by multiple cores, it would speed up a lot.

Take a look at the profiler image and try to get a quick understanding
of the code; after following a few function calls, it will become clear.

Granted, I/O is still a part of the problem which is why I think caches
would help too; but from what I see the time / space complexity is just
too high, so you don't even have to deem this as CPU or I/O bound...
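A toy illustration (not Portage code) of why lowering complexity beats
faster hardware: the same membership query, naive versus hashed, with an
invented workload:

```python
# Hypothetical workload: which of n candidate atoms are installed?
def matches_quadratic(candidates, installed):
    # O(n*m): scans the installed *list* once per candidate
    return [c for c in candidates if c in installed]

def matches_linear(candidates, installed):
    # O(n + m): one pass to build a hash set, then O(1) lookups
    installed_set = set(installed)
    return [c for c in candidates if c in installed_set]

candidates = [f"cat/pkg-{i}" for i in range(2000)]
installed = candidates[::2]      # every other package is installed
assert matches_quadratic(candidates, installed) == \
       matches_linear(candidates, installed)
print(len(matches_linear(candidates, installed)))  # → 1000
```

Both give the same answer; the second just stops redoing work the first
repeats for every candidate, and the gap widens with tree size no matter
how fast the disk is.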

> >> Quite apart from the theory and question of making the existing
> >> code faster vs. a new from-scratch implementation, there's the
> >> practical question of what options one can actually use to deal
> >> with the problem /now/.
> >
> > Don't rush it: Do you know the problem well? Does the solution
> > properly deal with it? Is it still usable some months / years from
> > now?
>
> Not necessarily. But first we must /get/ to some months / years from
> now, and that's a lot easier if the best is made of the current
> situation, while a long term fix is being developed.

True, we have to make the most of Portage for as long as possible.

> >> FWIW, one solution (particularly for folks who don't claim to have
> >> reasonable coding skills and thus have limited options in that
> >> regard) is to throw hardware at the problem.
> >
> > Improvements in algorithmic complexity (exponential) are much bigger
> > than improvements you can achieve by buying new hardware (linear).
>
> Same song different verse. Fixing the algorithmic complexity is fine
> and certainly a good idea longer term, but it's not something I can
> use at my next update. Throwing hardware at the problem is usable
> now.

If you have the money; yes, that's an option.

Though I think a lot of people see Linux as something you don't need to
throw a lot of money at; it should run on low end systems, and that's
kind of the type of users we shouldn't just neglect going forward.

> >> [2] ... SNIP ... runs ~1 hour ... SNIP ...
> >
> > Sounds great, but the same thing could run in much less time. I have
> > worse hardware, and it doesn't take much longer than yours do; so, I
> > don't really see the benefits new hardware bring to the table. And
> > that HDD to SSD change, that's really a once in a lifetime flood.
>
> I expect I'm more particular than most about checking changelogs. I
> certainly don't read them all, but if there's a revision-bump for
> instance, I like to see what the gentoo devs considered important
> enough to do a revision bump. And I religiously check portage logs,
> selecting mentioned bug numbers probably about half the time, which
> pops up a menu with a gentoo bug search on the number, from which I
> check the bug details and sometimes the actual git commit code. For
> all my overlays I check the git whatchanged logs, and I have a helper
> script that lets me fetch and then check git whatchanged for a number
> of my live packages, including openrc (where I switched to live-git
> precisely /because/ I was following it closely enough to find the git
> whatchanged logs useful, both for general information and for
> troubleshooting when something went wrong -- release versions simply
> didn't have enough resolution, too many things changing in each
> openrc release to easily track down problems and file bugs as
> appropriate), as well.

I stick more to releases, checking the changes for the things I want to
know about; the others either don't matter or shouldn't really hurt as
a surprise. If there's something that would really surprise me, then
I'd expect some news on it.

> And you're probably not rebuilding well over a hundred live-packages
> (thank $DEITY and the devs in question for ccache!) at every update,
> in addition to the usual (deep) @world version-bump and newuse
> updates, are you?

Developers rebuild those to see upcoming breakage.

Apart from that, I don't use many -9999 as to not go too unstable.

> >> [3] Also relevant, 16 gigs RAM, PORTAGETMPDIR on tmpfs.
> >
> > Sounds all cool, but think about your CPU again; saturate it...
> >
> > Building the Linux kernel with `make -j32 -l8` versus `make -j8` is
> > a huge difference; most people follow the latter instructions,
> > without really thinking through what actually happens with the
> > underlying data. The former queues up jobs for your processor; so
> > the moment a job is done a new job will be ready, so, you don't
> > need to wait on the disk.
>
> Truth is, I used to run a plain make -j (no number and no -l at all)
> on my kernel builds, just to watch the system stress and then so
> elegantly recover. It's an amazing thing to watch, this Linux kernel
> thing and how it deals with cpu oversaturation. =:^)

If you have the memory to pull it off, which involves money again.

> But I suppose I've gotten more conservative in my old age. =:^P

> Needlessly oversaturating the CPU (and RAM) only slows things down
> and forces cache dump and swappage.

The trick is to set it a bit before the point of oversaturating; low
enough that most packages don't oversaturate. It could be tuned more
precisely for every package, but that time is better spent elsewhere.

> > Something completely different; look at the history of data mining,
> > today's algorithms are much much faster than those of years ago.
> >
> > Just to point out that different implementations and configurations
> > have much more power in cutting time than the typical hardware
> > change does.
>
> I agree and am not arguing that. All I'm saying is that there are
> measures that a sysadmin can take today to at least help work around
> the problem, today, while all those faster algorithms are being
> developed, implemented, tested and deployed. =:^)

Not everyone is a sysadmin with a server; I'm just a student running a
laptop bought some years ago, and I'm kind of the type that doesn't
replace it while it still works fine otherwise. Maybe when I graduate...

I think we can both agree a faster system does a better job at it; but
it won't deal with the crux of the problem, the algorithmic complexity.

Dealing with both, as you mention, is the real deal.

--
With kind regards,

Tom Wijsman (TomWij)
Gentoo Developer

E-mail address : TomWij@gentoo.org
GPG Public Key : 6D34E57D
GPG Fingerprint : C165 AF18 AB4C 400B C3D2 ABF0 95B2 1FCD 6D34 E57D
Re: Packages up for grabs
Tom Wijsman posted on Tue, 25 Jun 2013 01:18:07 +0200 as excerpted:

> On Mon, 24 Jun 2013 15:27:19 +0000 (UTC)
> Duncan <1i5t5.duncan@cox.net> wrote:
>
>> Throwing hardware at the problem is usable now.
>
> If you have the money; yes, that's an option.
>
> Though I think a lot of people see Linux as something you don't need to
> throw a lot of money at; it should run on low end systems, and that's
> kind of the type of users we shouldn't just neglect going forward.

Well, let's be honest. Anyone building packages on gentoo isn't likely
to be doing it on a truly low-end system. For general linux, yes,
agreed, but that's what Puppy Linux and the like are for. True, there
are the masochistic types that build natively on embedded or decade-plus-old
(and mid-level or lower then!) systems, but most folks with that sort of
system either have a reasonable build server to build it on, or use a pre-
built binary distro. And the masochistic types... well, if it takes an
hour to get the prompt in an emerge --ask and another day or two to
actually complete, that's simply more masochism for them to revel in. =:^P

Tho you /do/ have a point.

OTOH, some of us used to do MS or Apple or whatever and split our money
between hardware and software. Now we pay less for the software, but
that doesn't mean we /spend/ significantly less on the machines; now it's
mostly/all hardware.

I've often wondered why the hardware folks aren't all over Linux, given
the more money available for hardware it can mean, and certainly /does/
mean here.

>> Truth is, I used to run a plain make -j (no number and no -l at all) on
>> my kernel builds, just to watch the system stress and then so elegantly
>> recover. It's an amazing thing to watch, this Linux kernel thing and
>> how it deals with cpu oversaturation. =:^)
>
> If you have the memory to pull it off, which involves money again.

What was interesting was doing it without the (real) memory -- letting it
go into swap and just queue up hundreds and hundreds of jobs as the make
continued to generate more and more of them, faster than they could even
fully initialize, particularly since they were packing into swap before
they even had that chance.

And then with 500-600 jobs or more (custom kernel build, not an
all-yes/all-mod config, or it'd likely have been 1200...) stacked up and gigs into
swap, watch the system finally start to slowly unwind the tangle.
Obviously the system wasn't usable for anything else during the worst of
it, but it still rather fascinates me that the kernel scheduling and code
quality in general is such that it can successfully do that and unwind it
all, without crashing or whatever. And the kernel build is one of the
few projects that's /that/ incredibly parallel, without requiring /too/
much memory per individual job, to do it in the first place.

Actually, that's probably the flip side of my getting more conservative.
The reason I /can/ get more conservative now is that I've enough cores
and memory that it's actually reasonably practical to do so. When you're
always dumping cache and/or swapping anyway, no big deal to do so a bit
more. When you have a system big enough to avoid that while still
getting reasonably large chunks of real work done, and you're no longer
used to the compromise of /having/ to dump cache, suddenly you're a lot
more sensitive to doing so at all!

>> Needlessly oversaturating the CPU (and RAM) only slows things down and
>> forces cache dump and swappage.
>
> The trick is to set it a bit before the point of oversaturating; low
> enough that most packages don't oversaturate. It could be tuned more
> precisely for every package, but that time is better spent elsewhere.

Indeed. =:^)

> Not everyone is a sysadmin with a server; I'm just a student running a
> laptop bought some years ago, and I'm kind of the type that doesn't
> replace it while it still works fine otherwise. Maybe when I graduate...

Actually, I use "sysadmin" in the literal sense, the person taking the
practical responsibility for deciding what goes on a system, when/if/what
to upgrade (or not), with particular emphasis on RESPONSIBILITY, both for
security and both keeping the system running and getting it back running
again when it breaks. Nothing in that says it has to be commercial, or
part of some huge farm of systems. For me, the person taking
responsibility (or failing to take it) for updating that third-generation
hand-me-down castoff system is as much of a sysadmin for that system, as
the guy/gal with 100 or 1000 systems (s)he's responsible for.

My perspective has always been that if all those folks running virus
infested junk out there actually took the sysadmin responsibility for the
systems they're running seriously, the virus/malware issue would cease to
be an issue at all.

Meanwhile, I'll admit my last system was rather better than average when
I first set it up (dual socket original 3-digit Opteron, that whole
spending all the money I used to spend on software, on hardware, now,
thing, my first 64-bit machine and my first and likely last real
dual-CPU... socket); in fact, compared to peers of its time it may well be the
best system I'll ever own, but that thing lasted me 8+ years. My goal
was a decade but I didn't make it as the caps on the mobo were bulging
and finally popping by the time I got rid of it. (The last month or so I
ran it, last summer here in Phoenix, it'd run if I kept it cold enough,
basically 15C or lower, so I was dressing up in a winter jacket with long
underwear and a knit hat on, with the AC running to keep it cold enough
to run the computer inside, while outside it was 40C+!)

But OTOH, that was originally a $400 mobo alone, for quite some time
worth probably 2-3 grand total as I kept upgrading bits and pieces of it
as I had the money. But FTR, I /am/ quite happy with the 6-core
Bulldozer-1 that replaced it, when I finally really had no other choice.
And the replacement was *MUCH* cheaper!

But anyway, yeah, I do know a bit about running old hardware, myself, and
know how to make those dollars strreeettcchh myself. =:^)

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Re: Packages up for grabs
On Sat, 12 Oct 2013 17:35:35 +0300
Pavlos Ratis <dastergon@gentoo.org> wrote:

> On Sat, Oct 12, 2013 at 10:02 AM, Pacho Ramos <pacho@gentoo.org>
> wrote:
> > Due rajiv retirement:
> > net-dns/dnstop
>
> I'll take net-dns/dnstop.
>

Please note that a proxied maintainer was assigned to this earlier
today; so, you might want to contact them:

https://bugs.gentoo.org/show_bug.cgi?id=451132

--
With kind regards,

Tom Wijsman (TomWij)
Gentoo Developer

E-mail address : TomWij@gentoo.org
GPG Public Key : 6D34E57D
GPG Fingerprint : C165 AF18 AB4C 400B C3D2 ABF0 95B2 1FCD 6D34 E57D
Re: Packages up for grabs [ In reply to ]
I want to grab the following:

www-apps/ikiwiki


On Tue, Dec 24, 2013 at 3:11 AM, Pavlos Ratis <dastergon@gentoo.org> wrote:

> I want to grab the following:
>
> dev-vcs/vcsh
> dev-vcs/mr
> app-admin/cronolog
> app-emulation/ganeti-instance-image
>
> If there isn't any objection, I'll add myself as maintainer.
>



--


--
Gentoo, If it moves, compile it!
My_overlay: https://github.com/aliceinwire/overlay
Mail: Alice Ferrazzi <alice.ferrazzi@gmail.com>
PGP: 0EE4 555E 3AAC B4A4 798D 9AC5 8E31 1808 C553 2D33
Re: Packages up for grabs [ In reply to ]
I already took over maintainership. If you want to help maintaining it,
feel free to ping me offlist and I'll add you, too.

Cheers,

Manuel

On 12/23/2013 05:37 PM, Manuel Rüger wrote:
> On 12/23/2013 04:40 PM, Pacho Ramos wrote:
>> Due tove lack of time:
>> www-apps/ikiwiki
>>
>>
>>
> I'll add myself if no one objects.
>
> Cheers,
>
> Manuel
>


On 12/25/2013 07:19 AM, Alice Ferrazzi wrote:
> i want to grab the following:
>
> www-apps/ikiwiki
>
>
> On Tue, Dec 24, 2013 at 3:11 AM, Pavlos Ratis <dastergon@gentoo.org
> <mailto:dastergon@gentoo.org>> wrote:
>
> I want to grab the following:
>
> dev-vcs/vcsh
> dev-vcs/mr
> app-admin/cronolog
> app-emulation/ganeti-instance-image
>
> If there isn't any objection, I'll add myself as maintainer.
>
>
>
>
> --
>
>
> --
> Gentoo, If it moves, compile it!
> My_overlay: https://github.com/aliceinwire/overlay
> Mail: Alice Ferrazzi <alice.ferrazzi@gmail.com
> <mailto:alice.ferrazzi@gmail.com>>
> PGP: 0EE4 555E 3AAC B4A4 798D 9AC5 8E31 1808 C553 2D33
Re: Packages up for grabs [ In reply to ]
On 17:35 Mon 14 Apr , Alice Ferrazzi wrote:
> ...
> I would like to take
> net-wireless/wpa_supplicant

You're listed in metadata.xml of 8 packages already, as proxy. Have you considered trying
to find a mentor and becoming a full developer?
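For context, a proxied (non-developer) maintainer is recorded in a package's metadata.xml; in the format of that era it looked roughly like this sketch, with the herd and description values here being illustrative assumptions rather than copied from any actual package:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE pkgmetadata SYSTEM "http://www.gentoo.org/dtd/metadata.dtd">
<pkgmetadata>
  <herd>proxy-maintainers</herd>
  <!-- The proxied (non-developer) maintainer, CC'd on bugs -->
  <maintainer>
    <email>alice.ferrazzi@gmail.com</email>
    <name>Alice Ferrazzi</name>
    <description>Proxied maintainer; commits go through a developer proxy</description>
  </maintainer>
</pkgmetadata>
```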

--
Panagiotis Christopoulos ( pchrist )
( Gentoo Lisp Project )
Re: Packages up for grabs [ In reply to ]
>There are still other Gentoo Developers listed in some of them, for
>example owncloud and wpa_supplicant; are they really up for grabs?

About the package, I mainly use only wpa_supplicant for connecting, so I
would like to help give support if necessary, but as Tom
Wijsman said there are already 2 maintainers.

> You're listed in metadata.xml of 8 packages already, as proxy. Have you considered trying
> to find a mentor and becoming a full developer?

I'm interested in becoming a full developer.

On Mon, Apr 14, 2014 at 6:12 PM, Panagiotis Christopoulos
<pchrist@gentoo.org> wrote:
> On 17:35 Mon 14 Apr , Alice Ferrazzi wrote:
>> ...
>> I would like to take
>> net-wireless/wpa_supplicant
>
> You're listed in metadata.xml of 8 packages already, as proxy. Have you considered trying
> to find a mentor and becoming a full developer?
>
> --
> Panagiotis Christopoulos ( pchrist )
> ( Gentoo Lisp Project )



--
アリス フェッラッシィ
Alice Ferrazzi

Gentoo, If it moves, compile it!
My_overlay: https://github.com/aliceinwire/overlay
Mail: Alice Ferrazzi <alice.ferrazzi@gmail.com>
PGP: 0EE4 555E 3AAC B4A4 798D 9AC5 8E31 1808 C553 2D33
Re: Packages up for grabs [ In reply to ]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 04/14/2014 04:35 AM, Alice Ferrazzi wrote:
>> There is a list of packages up for grabs. I cannot test some of
>> them anymore, or I stopped using them.
>>
>> app-text/fbreader
>> dev-libs/liblinebreak
>> net-wireless/madwimax
>> net-wireless/wimax-tools
>> net-wireless/wimax
>> net-wireless/wpa_supplicant
>> sys-fs/ocfs2-tools
>> www-apps/owncloud
>> www-apps/rutorrent
>
> I would like to take
> net-wireless/wpa_supplicant
>
wpa_supplicant has multiple active maintainers, but I don't mind you
helping as long as gurligebis doesn't.

- -Zero
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBAgAGBQJTS91ZAAoJEKXdFCfdEflKYdkP/Ri8dqjpGi4EiNhdTnXRfgug
+JDgkRoqzy3ajNWWxwGK3dWgM2mbuZjUdN8x7G3SFJJu80irJv+y4PPVmbVVG2RE
GL1bS6YokgeM5PGXVinyDqX2/xXf3zFgbYopxoooFO9+nV/ZEsh3yJG8VM4Vbw4S
svYuuEQTFXTHAjY//TT4oO+Q6jobtWkjpBSV3O2uU4ltDKRvBdlwwkS96I5iYqAM
le6Kpj4NVxxFx44NHoqk0wKHeKNW4zh1Hngr1eZnWfxdIFbTExr9cJ9D6KPfDF+X
09ry4X2nd4ApzQY5iIrT1DgQVtGeXiPLn7BY/J4Sg/1Y2X8+iIZaGaxObk/niN20
tpgRJ4Mw7cj6dn7DqkxODjkeAB9aDdRAeknAdDGPcTVw8r90XLch8vgATeF2/vhE
9mbdmoO1Oh5XROKdhSS4cNRpx7rv1EDJSsUqb76s5+Wk29b8neMoWKHXXSRZDNHo
CcNzSHLG55e3vu33WLPq0dTjrVjO7Zoamp8hIKpBPnEgvIPvFKPz1EpKOTFkFt3N
WOMIecy1PzWRf76iKUV6j22tm5slm3sZuZJFOhGTMcA2gi4tdIhxE+YaygxtID7N
tp7/ONqf9FBgwt5xmYRGlBguyWGKMczDOc0PP+mhoi/wjycj8aNcR5PWFHGdu8Xw
e5Kbp8ZA5NWVWLzFmUCF
=rgsR
-----END PGP SIGNATURE-----
Re: Packages up for grabs [ In reply to ]
2014-04-14 15:06 GMT+02:00 Rick "Zero_Chaos" Farina <zerochaos@gentoo.org>:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 04/14/2014 04:35 AM, Alice Ferrazzi wrote:
>>> There is a list of packages up for grabs. I cannot test some of
>>> them anymore, or I stopped using them.
>>>
>>> app-text/fbreader
>>> dev-libs/liblinebreak
>>> net-wireless/madwimax
>>> net-wireless/wimax-tools
>>> net-wireless/wimax
>>> net-wireless/wpa_supplicant
>>> sys-fs/ocfs2-tools
>>> www-apps/owncloud
>>> www-apps/rutorrent
>>
>> I would like to take
>> net-wireless/wpa_supplicant
>>
> wpa_supplicant has multiple active maintainers, but I don't mind you
> helping as long as gurligebis doesn't.

Fine with me :)
My usage of wpa_supplicant has been going down lately, so more eyes on
it would be a good thing :)

/GG

> - -Zero
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v2.0.22 (GNU/Linux)
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iQIcBAEBAgAGBQJTS91ZAAoJEKXdFCfdEflKYdkP/Ri8dqjpGi4EiNhdTnXRfgug
> +JDgkRoqzy3ajNWWxwGK3dWgM2mbuZjUdN8x7G3SFJJu80irJv+y4PPVmbVVG2RE
> GL1bS6YokgeM5PGXVinyDqX2/xXf3zFgbYopxoooFO9+nV/ZEsh3yJG8VM4Vbw4S
> svYuuEQTFXTHAjY//TT4oO+Q6jobtWkjpBSV3O2uU4ltDKRvBdlwwkS96I5iYqAM
> le6Kpj4NVxxFx44NHoqk0wKHeKNW4zh1Hngr1eZnWfxdIFbTExr9cJ9D6KPfDF+X
> 09ry4X2nd4ApzQY5iIrT1DgQVtGeXiPLn7BY/J4Sg/1Y2X8+iIZaGaxObk/niN20
> tpgRJ4Mw7cj6dn7DqkxODjkeAB9aDdRAeknAdDGPcTVw8r90XLch8vgATeF2/vhE
> 9mbdmoO1Oh5XROKdhSS4cNRpx7rv1EDJSsUqb76s5+Wk29b8neMoWKHXXSRZDNHo
> CcNzSHLG55e3vu33WLPq0dTjrVjO7Zoamp8hIKpBPnEgvIPvFKPz1EpKOTFkFt3N
> WOMIecy1PzWRf76iKUV6j22tm5slm3sZuZJFOhGTMcA2gi4tdIhxE+YaygxtID7N
> tp7/ONqf9FBgwt5xmYRGlBguyWGKMczDOc0PP+mhoi/wjycj8aNcR5PWFHGdu8Xw
> e5Kbp8ZA5NWVWLzFmUCF
> =rgsR
> -----END PGP SIGNATURE-----
>
Re: Packages up for grabs [ In reply to ]
On 05/28/2014 09:32 PM, Dirkjan Ochtman wrote:
> On Wed, May 28, 2014 at 10:28 PM, Markos Chandras <hwoarang@gentoo.org> wrote:
>> Indeed. I only use a subset of the dev-tools packages so those that I
>> don't use will be unmaintained in practice. I will add something to the
>> Staffing Needs wiki page but feel free to join the herd if you have any
>> interest in these packages.
>
> Perhaps it makes more sense to disband the herd and put all packages
> except the ones you use up for grabs?
>
> Cheers,
>
> Dirkjan
>
I suppose so. Let me have a look and see how many packages belong to
that herd and then I will see what to do.

--
Regards,
Markos Chandras
Re: Packages up for grabs [ In reply to ]
On Fri, 30 May 2014 23:07:55 +0100
Markos Chandras <hwoarang@gentoo.org> wrote:

> On 05/28/2014 09:32 PM, Dirkjan Ochtman wrote:
> > Perhaps it makes more sense to disband the herd and put all packages
> > except the ones you use up for grabs?

> I suppose so. Let me have a look and see how many packages belong to
> that herd and then I will see what to do.

You should maybe wait a few weeks. It wouldn't make sense to first
call on developers to join an existing structure, and to then
immediately tear it down leaving them to pick up the pieces.


jer
Re: Packages up for grabs [ In reply to ]
On 06/02/2014 02:03 PM, Jeroen Roovers wrote:
> On Fri, 30 May 2014 23:07:55 +0100
> Markos Chandras <hwoarang@gentoo.org> wrote:
>
>> On 05/28/2014 09:32 PM, Dirkjan Ochtman wrote:
>>> Perhaps it makes more sense to disband the herd and put all packages
>>> except the ones you use up for grabs?
>
>> I suppose so. Let me have a look and see how many packages belong to
>> that herd and then I will see what to do.
>
> You should maybe wait a few weeks. It wouldn't make sense to first
> call on developers to join an existing structure, and to then
> immediately tear it down leaving them to pick up the pieces.
>
>
> jer
>

Yes definitely. I wasn't planning on doing this overnight.

--
Regards,
Markos Chandras
Re: Packages up for grabs [ In reply to ]
On 06/ 3/14 02:50 AM, Parker Schmitt wrote:
> I think we need to keep the opencl stuff. In a few weeks I'll have
> time to help.
I work for PathScale and can probably take on

dev-lang/ekopath

path64 - while I'd like it to continue - it could (should?) be retired
---------
I'd need someone to help proxy the version bumps on ekopath though. Join
#pathscale on freenode or email me offlist to coordinate please.

Thanks
Re: Packages up for grabs [ In reply to ]
On Monday 02 June 2014 14:50:56 Parker Schmitt wrote:
> I think we need to keep the opencl stuff. In a few weeks I'll have time to
> help.
>
>
> On Mon, Mar 17, 2014 at 2:10 AM, Kacper Kowalik <xarthisius@gentoo.org>
>
> wrote:
> > Hi All!
> > There's a bunch of packages that I'm officially maintaining but due to the
> > lack of time I'm unable to do that properly. I'd be grateful if you
> > could step in for me. I'll remove myself from metadata in the following
> > packages within 7 days:
> >
> > # OpenCL
> > app-admin/eselect-opencl
> > dev-util/intel-ocl-sdk
> > virtual/opencl

I have some interest in those too, so if anyone wants to help out I can at
least be a commit proxy

Have fun,

Patrick
Re: Packages up for grabs [ In reply to ]
On 06/04/2014 12:44 PM, Jesus Rivero (Neurogeek) wrote:
> Due to a mix of "not currently using them", "not much time available"
> and "Upstreams has a completely different concept of packaging", the
> following packages have been marked maintainer-needed and are up for grabs:
>
> net-libs/ptlib
> net-libs/opal
> net-voip/ekiga
> app-admin/chef
> app-admin/chef-expander
> app-admin/chef-server
> app-admin/chef-server-api
> app-admin/chef-server-webui
> app-admin/chef-solr
>
> Hoping these poor souls find a new, loving, home.
>
> Cheers,
>
> --
> Jesus Rivero (Neurogeek)
> Gentoo Developer
I've thought about packaging chef (I do cookbooks for a living, more or
less), but each time I do, I back away because of Ruby and how much fun
it is to package (the only sane way would be gems, I think...). I really
want to use it at home, but can't (won't) if it's not a system package,
so stick with puppet I will (both maintaining it and using it at home :D)

--
-- Matthew Thode (prometheanfire)
