Mailing List Archive

Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
On 6/17/22 11:24 AM, icing@apache.org wrote:
> Author: icing
> Date: Fri Jun 17 09:24:57 2022
> New Revision: 1902005
>
> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
> Log:
> *) mod_http2: new implementation of h2 worker pool.
> - O(1) cost at registration of connection processing producers
> - no limit on registered producers
> - join of ongoing work on unregister
> - callbacks to unlink dependencies into other h2 code
> - memory cleanup on workers deactivation (on idle timeouts)
> - idle_limit as apr_time_t instead of seconds
>
>
> Modified:
> httpd/httpd/trunk/modules/http2/h2_c1.c
> httpd/httpd/trunk/modules/http2/h2_config.c
> httpd/httpd/trunk/modules/http2/h2_config.h
> httpd/httpd/trunk/modules/http2/h2_mplx.c
> httpd/httpd/trunk/modules/http2/h2_mplx.h
> httpd/httpd/trunk/modules/http2/h2_workers.c
> httpd/httpd/trunk/modules/http2/h2_workers.h
> httpd/httpd/trunk/modules/http2/mod_http2.c
>
> Modified: httpd/httpd/trunk/modules/http2/h2_c1.c
> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_c1.c?rev=1902005&r1=1902004&r2=1902005&view=diff
> ==============================================================================
> --- httpd/httpd/trunk/modules/http2/h2_c1.c (original)
> +++ httpd/httpd/trunk/modules/http2/h2_c1.c Fri Jun 17 09:24:57 2022
> @@ -56,11 +56,8 @@ apr_status_t h2_c1_child_init(apr_pool_t
> {
> apr_status_t status = APR_SUCCESS;
> int minw, maxw;
> - int max_threads_per_child = 0;
> - int idle_secs = 0;
> + apr_time_t idle_limit;
>
> - ap_mpm_query(AP_MPMQ_MAX_THREADS, &max_threads_per_child);
> -
> status = ap_mpm_query(AP_MPMQ_IS_ASYNC, &async_mpm);
> if (status != APR_SUCCESS) {
> /* some MPMs do not implemnent this */
> @@ -70,12 +67,8 @@ apr_status_t h2_c1_child_init(apr_pool_t
>
> h2_config_init(pool);
>
> - h2_get_num_workers(s, &minw, &maxw);
> - idle_secs = h2_config_sgeti(s, H2_CONF_MAX_WORKER_IDLE_SECS);
> - ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, s,
> - "h2_workers: min=%d max=%d, mthrpchild=%d, idle_secs=%d",
> - minw, maxw, max_threads_per_child, idle_secs);
> - workers = h2_workers_create(s, pool, minw, maxw, idle_secs);
> + h2_get_workers_config(s, &minw, &maxw, &idle_limit);
> + workers = h2_workers_create(s, pool, maxw, minw, idle_limit);

Shouldn't that be

workers = h2_workers_create(s, pool, minw, maxw, idle_limit);

instead?

>
> h2_c_logio_add_bytes_in = APR_RETRIEVE_OPTIONAL_FN(ap_logio_add_bytes_in);
> h2_c_logio_add_bytes_out = APR_RETRIEVE_OPTIONAL_FN(ap_logio_add_bytes_out);
>

Regards

Rüdiger
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
On 6/20/22 8:53 AM, Ruediger Pluem wrote:
>
>
> On 6/17/22 11:24 AM, icing@apache.org wrote:
>> Author: icing
>> Date: Fri Jun 17 09:24:57 2022
>> New Revision: 1902005
>>
>> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
>> Log:
>> *) mod_http2: new implementation of h2 worker pool.
>> - O(1) cost at registration of connection processing producers
>> - no limit on registered producers
>> - join of ongoing work on unregister
>> - callbacks to unlink dependencies into other h2 code
>> - memory cleanup on workers deactivation (on idle timeouts)
>> - idle_limit as apr_time_t instead of seconds
>>
>>
>> Modified:
>> httpd/httpd/trunk/modules/http2/h2_c1.c
>> httpd/httpd/trunk/modules/http2/h2_config.c
>> httpd/httpd/trunk/modules/http2/h2_config.h
>> httpd/httpd/trunk/modules/http2/h2_mplx.c
>> httpd/httpd/trunk/modules/http2/h2_mplx.h
>> httpd/httpd/trunk/modules/http2/h2_workers.c
>> httpd/httpd/trunk/modules/http2/h2_workers.h
>> httpd/httpd/trunk/modules/http2/mod_http2.c
>>
>> Modified: httpd/httpd/trunk/modules/http2/h2_c1.c
>> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_c1.c?rev=1902005&r1=1902004&r2=1902005&view=diff
>> ==============================================================================
>> --- httpd/httpd/trunk/modules/http2/h2_c1.c (original)
>> +++ httpd/httpd/trunk/modules/http2/h2_c1.c Fri Jun 17 09:24:57 2022
>> @@ -56,11 +56,8 @@ apr_status_t h2_c1_child_init(apr_pool_t
>> {
>> apr_status_t status = APR_SUCCESS;
>> int minw, maxw;
>> - int max_threads_per_child = 0;
>> - int idle_secs = 0;
>> + apr_time_t idle_limit;
>>
>> - ap_mpm_query(AP_MPMQ_MAX_THREADS, &max_threads_per_child);
>> -
>> status = ap_mpm_query(AP_MPMQ_IS_ASYNC, &async_mpm);
>> if (status != APR_SUCCESS) {
>> /* some MPMs do not implemnent this */
>> @@ -70,12 +67,8 @@ apr_status_t h2_c1_child_init(apr_pool_t
>>
>> h2_config_init(pool);
>>
>> - h2_get_num_workers(s, &minw, &maxw);
>> - idle_secs = h2_config_sgeti(s, H2_CONF_MAX_WORKER_IDLE_SECS);
>> - ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, s,
>> - "h2_workers: min=%d max=%d, mthrpchild=%d, idle_secs=%d",
>> - minw, maxw, max_threads_per_child, idle_secs);
>> - workers = h2_workers_create(s, pool, minw, maxw, idle_secs);
>> + h2_get_workers_config(s, &minw, &maxw, &idle_limit);
>> + workers = h2_workers_create(s, pool, maxw, minw, idle_limit);
>
> Shouldn't that be
>
> workers = h2_workers_create(s, pool, minw, maxw, idle_limit);
>
> instead?

Scratch that. I just noticed that the order of the parameters in h2_workers_create was changed as well.

Regards

Rüdiger
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
> Am 20.06.2022 um 08:59 schrieb Ruediger Pluem <rpluem@apache.org>:
>
>
>
> On 6/20/22 8:53 AM, Ruediger Pluem wrote:
>>
>>
>> On 6/17/22 11:24 AM, icing@apache.org wrote:
>>> Author: icing
>>> Date: Fri Jun 17 09:24:57 2022
>>> New Revision: 1902005
>>>
>>> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
>>> Log:
>>> *) mod_http2: new implementation of h2 worker pool.
>>> - O(1) cost at registration of connection processing producers
>>> - no limit on registered producers
>>> - join of ongoing work on unregister
>>> - callbacks to unlink dependencies into other h2 code
>>> - memory cleanup on workers deactivation (on idle timeouts)
>>> - idle_limit as apr_time_t instead of seconds
>>>
>>>
>>> Modified:
>>> httpd/httpd/trunk/modules/http2/h2_c1.c
>>> httpd/httpd/trunk/modules/http2/h2_config.c
>>> httpd/httpd/trunk/modules/http2/h2_config.h
>>> httpd/httpd/trunk/modules/http2/h2_mplx.c
>>> httpd/httpd/trunk/modules/http2/h2_mplx.h
>>> httpd/httpd/trunk/modules/http2/h2_workers.c
>>> httpd/httpd/trunk/modules/http2/h2_workers.h
>>> httpd/httpd/trunk/modules/http2/mod_http2.c
>>>
>>> Modified: httpd/httpd/trunk/modules/http2/h2_c1.c
>>> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_c1.c?rev=1902005&r1=1902004&r2=1902005&view=diff
>>> ==============================================================================
>>> --- httpd/httpd/trunk/modules/http2/h2_c1.c (original)
>>> +++ httpd/httpd/trunk/modules/http2/h2_c1.c Fri Jun 17 09:24:57 2022
>>> @@ -56,11 +56,8 @@ apr_status_t h2_c1_child_init(apr_pool_t
>>> {
>>> apr_status_t status = APR_SUCCESS;
>>> int minw, maxw;
>>> - int max_threads_per_child = 0;
>>> - int idle_secs = 0;
>>> + apr_time_t idle_limit;
>>>
>>> - ap_mpm_query(AP_MPMQ_MAX_THREADS, &max_threads_per_child);
>>> -
>>> status = ap_mpm_query(AP_MPMQ_IS_ASYNC, &async_mpm);
>>> if (status != APR_SUCCESS) {
>>> /* some MPMs do not implemnent this */
>>> @@ -70,12 +67,8 @@ apr_status_t h2_c1_child_init(apr_pool_t
>>>
>>> h2_config_init(pool);
>>>
>>> - h2_get_num_workers(s, &minw, &maxw);
>>> - idle_secs = h2_config_sgeti(s, H2_CONF_MAX_WORKER_IDLE_SECS);
>>> - ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, s,
>>> - "h2_workers: min=%d max=%d, mthrpchild=%d, idle_secs=%d",
>>> - minw, maxw, max_threads_per_child, idle_secs);
>>> - workers = h2_workers_create(s, pool, minw, maxw, idle_secs);
>>> + h2_get_workers_config(s, &minw, &maxw, &idle_limit);
>>> + workers = h2_workers_create(s, pool, maxw, minw, idle_limit);
>>
>> Shouldn't that be
>>
>> workers = h2_workers_create(s, pool, minw, maxw, idle_limit);
>>
>> instead?
>
> Scratch that. I just noticed that the order of the parameters in h2_workers_create was changed as well.

You had me scrambling there for a second. ;)

Yeah, might be more "natural" to have them the other way around. No strong feelings.

Cheers,
Stefan

>
> Regards
>
> Rüdiger
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
> Am 20.06.2022 um 09:01 schrieb Stefan Eissing <stefan@eissing.org>:
>
>
>
>> Am 20.06.2022 um 08:59 schrieb Ruediger Pluem <rpluem@apache.org>:
>>
>>
>>
>> On 6/20/22 8:53 AM, Ruediger Pluem wrote:
>>>
>>>
>>> On 6/17/22 11:24 AM, icing@apache.org wrote:
>>>> Author: icing
>>>> Date: Fri Jun 17 09:24:57 2022
>>>> New Revision: 1902005
>>>>
>>>> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
>>>> Log:
>>>> *) mod_http2: new implementation of h2 worker pool.
>>>> - O(1) cost at registration of connection processing producers
>>>> - no limit on registered producers
>>>> - join of ongoing work on unregister
>>>> - callbacks to unlink dependencies into other h2 code
>>>> - memory cleanup on workers deactivation (on idle timeouts)
>>>> - idle_limit as apr_time_t instead of seconds
>>>>
>>>>
>>>> Modified:
>>>> httpd/httpd/trunk/modules/http2/h2_c1.c
>>>> httpd/httpd/trunk/modules/http2/h2_config.c
>>>> httpd/httpd/trunk/modules/http2/h2_config.h
>>>> httpd/httpd/trunk/modules/http2/h2_mplx.c
>>>> httpd/httpd/trunk/modules/http2/h2_mplx.h
>>>> httpd/httpd/trunk/modules/http2/h2_workers.c
>>>> httpd/httpd/trunk/modules/http2/h2_workers.h
>>>> httpd/httpd/trunk/modules/http2/mod_http2.c
>>>>
>>>> Modified: httpd/httpd/trunk/modules/http2/h2_c1.c
>>>> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_c1.c?rev=1902005&r1=1902004&r2=1902005&view=diff
>>>> ==============================================================================
>>>> --- httpd/httpd/trunk/modules/http2/h2_c1.c (original)
>>>> +++ httpd/httpd/trunk/modules/http2/h2_c1.c Fri Jun 17 09:24:57 2022
>>>> @@ -56,11 +56,8 @@ apr_status_t h2_c1_child_init(apr_pool_t
>>>> {
>>>> apr_status_t status = APR_SUCCESS;
>>>> int minw, maxw;
>>>> - int max_threads_per_child = 0;
>>>> - int idle_secs = 0;
>>>> + apr_time_t idle_limit;
>>>>
>>>> - ap_mpm_query(AP_MPMQ_MAX_THREADS, &max_threads_per_child);
>>>> -
>>>> status = ap_mpm_query(AP_MPMQ_IS_ASYNC, &async_mpm);
>>>> if (status != APR_SUCCESS) {
>>>> /* some MPMs do not implemnent this */
>>>> @@ -70,12 +67,8 @@ apr_status_t h2_c1_child_init(apr_pool_t
>>>>
>>>> h2_config_init(pool);
>>>>
>>>> - h2_get_num_workers(s, &minw, &maxw);
>>>> - idle_secs = h2_config_sgeti(s, H2_CONF_MAX_WORKER_IDLE_SECS);
>>>> - ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, s,
>>>> - "h2_workers: min=%d max=%d, mthrpchild=%d, idle_secs=%d",
>>>> - minw, maxw, max_threads_per_child, idle_secs);
>>>> - workers = h2_workers_create(s, pool, minw, maxw, idle_secs);
>>>> + h2_get_workers_config(s, &minw, &maxw, &idle_limit);
>>>> + workers = h2_workers_create(s, pool, maxw, minw, idle_limit);
>>>
>>> Shouldn't that be
>>>
>>> workers = h2_workers_create(s, pool, minw, maxw, idle_limit);
>>>
>>> instead?
>>
>> Scratch that. I just noticed that the order of the parameters in h2_workers_create was changed as well.
>
> You had me scrambling there for a second. ;)
>
> Yeah, might be more "natural" to have them the other way around. No strong feelings.

To explain: I played around with different parameters here and ended up removing them again. In the interim versions, the argument order got changed.

>
> Cheers,
> Stefan
>
>>
>> Regards
>>
>> Rüdiger
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
On 6/17/22 11:24 AM, icing@apache.org wrote:
> Author: icing
> Date: Fri Jun 17 09:24:57 2022
> New Revision: 1902005
>
> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
> Log:
> *) mod_http2: new implementation of h2 worker pool.
> - O(1) cost at registration of connection processing producers
> - no limit on registered producers
> - join of ongoing work on unregister
> - callbacks to unlink dependencies into other h2 code
> - memory cleanup on workers deactivation (on idle timeouts)
> - idle_limit as apr_time_t instead of seconds
>
>
> Modified:
> httpd/httpd/trunk/modules/http2/h2_c1.c
> httpd/httpd/trunk/modules/http2/h2_config.c
> httpd/httpd/trunk/modules/http2/h2_config.h
> httpd/httpd/trunk/modules/http2/h2_mplx.c
> httpd/httpd/trunk/modules/http2/h2_mplx.h
> httpd/httpd/trunk/modules/http2/h2_workers.c
> httpd/httpd/trunk/modules/http2/h2_workers.h
> httpd/httpd/trunk/modules/http2/mod_http2.c

> Modified: httpd/httpd/trunk/modules/http2/h2_workers.c
> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_workers.c?rev=1902005&r1=1902004&r2=1902005&view=diff
> ==============================================================================
> --- httpd/httpd/trunk/modules/http2/h2_workers.c (original)
> +++ httpd/httpd/trunk/modules/http2/h2_workers.c Fri Jun 17 09:24:57 2022

>
> @@ -347,37 +367,47 @@ static apr_status_t workers_pool_cleanup
> int n, wait_sec = 5;

Should n be initialized to 0 to avoid

‘n’ may be used uninitialized in this function

?

>
> ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
> - "h2_workers: cleanup %d workers idling",
> - (int)apr_atomic_read32(&workers->worker_count));
> - workers_abort_idle(workers);
> + "h2_workers: cleanup %d workers (%d idle)",
> + workers->active_slots, workers->idle_slots);
> + apr_thread_mutex_lock(workers->lock);
> + workers->shutdown = 1;
> + workers->aborted = 1;
> + wake_all_idles(workers);
> + apr_thread_mutex_unlock(workers->lock);
>
> /* wait for all the workers to become zombies and join them.
> * this gets called after the mpm shuts down and all connections
> * have either been handled (graceful) or we are forced exiting
> * (ungrateful). Either way, we show limited patience. */
> - apr_thread_mutex_lock(workers->lock);
> end = apr_time_now() + apr_time_from_sec(wait_sec);
> - while ((n = apr_atomic_read32(&workers->worker_count)) > 0
> - && apr_time_now() < end) {
> + while (apr_time_now() < end) {
> + apr_thread_mutex_lock(workers->lock);
> + if (!(n = workers->active_slots)) {
> + apr_thread_mutex_unlock(workers->lock);
> + break;
> + }
> + wake_all_idles(workers);
> rv = apr_thread_cond_timedwait(workers->all_done, workers->lock, timeout);
> + apr_thread_mutex_unlock(workers->lock);
> +
> if (APR_TIMEUP == rv) {
> ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
> - APLOGNO(10290) "h2_workers: waiting for idle workers to close, "
> - "still seeing %d workers living",
> - apr_atomic_read32(&workers->worker_count));
> - continue;
> + APLOGNO(10290) "h2_workers: waiting for workers to close, "
> + "still seeing %d workers (%d idle) living",
> + workers->active_slots, workers->idle_slots);
> }
> }
> if (n) {
> ap_log_error(APLOG_MARK, APLOG_WARNING, 0, workers->s,
> - APLOGNO(10291) "h2_workers: cleanup, %d idle workers "
> + APLOGNO(10291) "h2_workers: cleanup, %d workers (%d idle) "
> "did not exit after %d seconds.",
> - n, wait_sec);
> + n, workers->idle_slots, wait_sec);
> }
> - apr_thread_mutex_unlock(workers->lock);
> ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
> "h2_workers: cleanup all workers terminated");
> + apr_thread_mutex_lock(workers->lock);
> join_zombies(workers);
> + apr_thread_mutex_unlock(workers->lock);
> ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
> "h2_workers: cleanup zombie workers joined");
>


Regards

Rüdiger
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
On 6/17/22 11:24 AM, icing@apache.org wrote:
> Author: icing
> Date: Fri Jun 17 09:24:57 2022
> New Revision: 1902005
>
> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
> Log:
> *) mod_http2: new implementation of h2 worker pool.
> - O(1) cost at registration of connection processing producers
> - no limit on registered producers
> - join of ongoing work on unregister
> - callbacks to unlink dependencies into other h2 code
> - memory cleanup on workers deactivation (on idle timeouts)
> - idle_limit as apr_time_t instead of seconds
>
>
> Modified:
> httpd/httpd/trunk/modules/http2/h2_c1.c
> httpd/httpd/trunk/modules/http2/h2_config.c
> httpd/httpd/trunk/modules/http2/h2_config.h
> httpd/httpd/trunk/modules/http2/h2_mplx.c
> httpd/httpd/trunk/modules/http2/h2_mplx.h
> httpd/httpd/trunk/modules/http2/h2_workers.c
> httpd/httpd/trunk/modules/http2/h2_workers.h
> httpd/httpd/trunk/modules/http2/mod_http2.c

> Modified: httpd/httpd/trunk/modules/http2/h2_workers.c
> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_workers.c?rev=1902005&r1=1902004&r2=1902005&view=diff
> ==============================================================================
> --- httpd/httpd/trunk/modules/http2/h2_workers.c (original)
> +++ httpd/httpd/trunk/modules/http2/h2_workers.c Fri Jun 17 09:24:57 2022
>
> @@ -385,13 +415,13 @@ static apr_status_t workers_pool_cleanup
> }
>
> h2_workers *h2_workers_create(server_rec *s, apr_pool_t *pchild,
> - int min_workers, int max_workers,
> - int idle_secs)
> + int max_slots, int min_active, apr_time_t idle_limit)
> {
> apr_status_t rv;
> h2_workers *workers;
> apr_pool_t *pool;
> - int i, n;
> + apr_allocator_t *allocator;
> + int i, locked = 0;
>
> ap_assert(s);
> ap_assert(pchild);
> @@ -401,7 +431,16 @@ h2_workers *h2_workers_create(server_rec
> * guarded by our lock. Without this pool, all subpool creations would
> * happen on the pool handed to us, which we do not guard.
> */
> - apr_pool_create(&pool, pchild);
> + rv = apr_allocator_create(&allocator);
> + if (rv != APR_SUCCESS) {
> + goto cleanup;
> + }
> + rv = apr_pool_create_ex(&pool, pchild, NULL, allocator);
> + if (rv != APR_SUCCESS) {
> + apr_allocator_destroy(allocator);
> + goto cleanup;
> + }
> + apr_allocator_owner_set(allocator, pool);
> apr_pool_tag(pool, "h2_workers");
> workers = apr_pcalloc(pool, sizeof(h2_workers));
> if (!workers) {
> @@ -410,19 +449,23 @@ h2_workers *h2_workers_create(server_rec
>
> workers->s = s;
> workers->pool = pool;
> - workers->min_workers = min_workers;
> - workers->max_workers = max_workers;
> - workers->max_idle_secs = (idle_secs > 0)? idle_secs : 10;
> + workers->min_active = min_active;
> + workers->max_slots = max_slots;
> + workers->idle_limit = (idle_limit > 0)? idle_limit : apr_time_from_sec(10);
> + workers->dynamic = (workers->min_active < workers->max_slots);
> +
> + ap_log_error(APLOG_MARK, APLOG_INFO, 0, workers->s,
> + "h2_workers: created with min=%d max=%d idle_ms=%d",
> + workers->min_active, workers->max_slots,
> + (int)apr_time_as_msec(idle_limit));
> +
> + APR_RING_INIT(&workers->idle, h2_slot, link);
> + APR_RING_INIT(&workers->busy, h2_slot, link);
> + APR_RING_INIT(&workers->free, h2_slot, link);
> + APR_RING_INIT(&workers->zombie, h2_slot, link);
>
> - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
> - "h2_workers: created with min=%d max=%d idle_timeout=%d sec",
> - workers->min_workers, workers->max_workers,
> - (int)workers->max_idle_secs);
> - /* FIXME: the fifo set we use here has limited capacity. Once the
> - * set is full, connections with new requests do a wait.
> - */
> - rv = h2_fifo_set_create(&workers->mplxs, pool, 16 * 1024);
> - if (rv != APR_SUCCESS) goto cleanup;
> + APR_RING_INIT(&workers->prod_active, ap_conn_producer_t, link);
> + APR_RING_INIT(&workers->prod_idle, ap_conn_producer_t, link);
>
> rv = apr_threadattr_create(&workers->thread_attr, workers->pool);
> if (rv != APR_SUCCESS) goto cleanup;
> @@ -441,32 +484,35 @@ h2_workers *h2_workers_create(server_rec
> if (rv != APR_SUCCESS) goto cleanup;
> rv = apr_thread_cond_create(&workers->all_done, workers->pool);
> if (rv != APR_SUCCESS) goto cleanup;
> + rv = apr_thread_cond_create(&workers->prod_done, workers->pool);
> + if (rv != APR_SUCCESS) goto cleanup;
>
> - n = workers->nslots = workers->max_workers;
> - workers->slots = apr_pcalloc(workers->pool, n * sizeof(h2_slot));
> - if (workers->slots == NULL) {
> - n = workers->nslots = 0;
> - rv = APR_ENOMEM;
> - goto cleanup;
> - }
> - for (i = 0; i < n; ++i) {
> + apr_thread_mutex_lock(workers->lock);
> + locked = 1;
> +
> + /* create the slots and put them on the free list */
> + workers->slots = apr_pcalloc(workers->pool, workers->max_slots * sizeof(h2_slot));
> +
> + for (i = 0; i < workers->max_slots; ++i) {
> workers->slots[i].id = i;
> - }
> - /* we activate all for now, TODO: support min_workers again.
> - * do this in reverse for vanity reasons so slot 0 will most
> - * likely be at head of idle queue. */
> - n = workers->min_workers;
> - for (i = n-1; i >= 0; --i) {
> - rv = activate_slot(workers, &workers->slots[i]);
> + workers->slots[i].state = H2_SLOT_FREE;
> + workers->slots[i].workers = workers;
> + APR_RING_ELEM_INIT(&workers->slots[i], link);
> + APR_RING_INSERT_TAIL(&workers->free, &workers->slots[i], h2_slot, link);
> + rv = apr_thread_cond_create(&workers->slots[i].more_work, workers->pool);
> if (rv != APR_SUCCESS) goto cleanup;
> }
> - /* the rest of the slots go on the free list */
> - for(i = n; i < workers->nslots; ++i) {
> - push_slot(&workers->free, &workers->slots[i]);
> +
> + /* activate the min amount of workers */
> + for (i = 0; i < workers->min_active; ++i) {
> + rv = activate_slot(workers);
> + if (rv != APR_SUCCESS) goto cleanup;
> }
> - workers->dynamic = (workers->worker_count < workers->max_workers);
>
> cleanup:
> + if (locked) {
> + apr_thread_mutex_unlock(workers->lock);
> + }
> if (rv == APR_SUCCESS) {
> /* Stop/join the workers threads when the MPM child exits (pchild is
> * destroyed), and as a pre_cleanup of pchild thus before the threads
> @@ -476,24 +522,84 @@ cleanup:
> apr_pool_pre_cleanup_register(pchild, workers, workers_pool_cleanup);
> return workers;
> }
> + ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, workers->s,
> + "h2_workers: errors initializing");
> return NULL;
> }

An open style question, asking for opinions: would it be better to use

pool instead of workers->pool
min_active instead of workers->min_active
max_slots instead of workers->max_slots

?

Would that make the code more

- readable
- faster

or is the current approach better?


Regards

Rüdiger
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
> Am 20.06.2022 um 09:08 schrieb Ruediger Pluem <rpluem@apache.org>:
>
>
>
> On 6/17/22 11:24 AM, icing@apache.org wrote:
>> Author: icing
>> Date: Fri Jun 17 09:24:57 2022
>> New Revision: 1902005
>>
>> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
>> Log:
>> *) mod_http2: new implementation of h2 worker pool.
>> - O(1) cost at registration of connection processing producers
>> - no limit on registered producers
>> - join of ongoing work on unregister
>> - callbacks to unlink dependencies into other h2 code
>> - memory cleanup on workers deactivation (on idle timeouts)
>> - idle_limit as apr_time_t instead of seconds
>>
>>
>> Modified:
>> httpd/httpd/trunk/modules/http2/h2_c1.c
>> httpd/httpd/trunk/modules/http2/h2_config.c
>> httpd/httpd/trunk/modules/http2/h2_config.h
>> httpd/httpd/trunk/modules/http2/h2_mplx.c
>> httpd/httpd/trunk/modules/http2/h2_mplx.h
>> httpd/httpd/trunk/modules/http2/h2_workers.c
>> httpd/httpd/trunk/modules/http2/h2_workers.h
>> httpd/httpd/trunk/modules/http2/mod_http2.c
>
>> Modified: httpd/httpd/trunk/modules/http2/h2_workers.c
>> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_workers.c?rev=1902005&r1=1902004&r2=1902005&view=diff
>> ==============================================================================
>> --- httpd/httpd/trunk/modules/http2/h2_workers.c (original)
>> +++ httpd/httpd/trunk/modules/http2/h2_workers.c Fri Jun 17 09:24:57 2022
>
>>
>> @@ -347,37 +367,47 @@ static apr_status_t workers_pool_cleanup
>> int n, wait_sec = 5;
>
> Should n be initialized to 0 to avoid
>
> ‘n’ may be used uninitialized in this function
>
> ?

Yes, good catch. Fixed in r1902082

>>
>> ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
>> - "h2_workers: cleanup %d workers idling",
>> - (int)apr_atomic_read32(&workers->worker_count));
>> - workers_abort_idle(workers);
>> + "h2_workers: cleanup %d workers (%d idle)",
>> + workers->active_slots, workers->idle_slots);
>> + apr_thread_mutex_lock(workers->lock);
>> + workers->shutdown = 1;
>> + workers->aborted = 1;
>> + wake_all_idles(workers);
>> + apr_thread_mutex_unlock(workers->lock);
>>
>> /* wait for all the workers to become zombies and join them.
>> * this gets called after the mpm shuts down and all connections
>> * have either been handled (graceful) or we are forced exiting
>> * (ungrateful). Either way, we show limited patience. */
>> - apr_thread_mutex_lock(workers->lock);
>> end = apr_time_now() + apr_time_from_sec(wait_sec);
>> - while ((n = apr_atomic_read32(&workers->worker_count)) > 0
>> - && apr_time_now() < end) {
>> + while (apr_time_now() < end) {
>> + apr_thread_mutex_lock(workers->lock);
>> + if (!(n = workers->active_slots)) {
>> + apr_thread_mutex_unlock(workers->lock);
>> + break;
>> + }
>> + wake_all_idles(workers);
>> rv = apr_thread_cond_timedwait(workers->all_done, workers->lock, timeout);
>> + apr_thread_mutex_unlock(workers->lock);
>> +
>> if (APR_TIMEUP == rv) {
>> ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
>> - APLOGNO(10290) "h2_workers: waiting for idle workers to close, "
>> - "still seeing %d workers living",
>> - apr_atomic_read32(&workers->worker_count));
>> - continue;
>> + APLOGNO(10290) "h2_workers: waiting for workers to close, "
>> + "still seeing %d workers (%d idle) living",
>> + workers->active_slots, workers->idle_slots);
>> }
>> }
>> if (n) {
>> ap_log_error(APLOG_MARK, APLOG_WARNING, 0, workers->s,
>> - APLOGNO(10291) "h2_workers: cleanup, %d idle workers "
>> + APLOGNO(10291) "h2_workers: cleanup, %d workers (%d idle) "
>> "did not exit after %d seconds.",
>> - n, wait_sec);
>> + n, workers->idle_slots, wait_sec);
>> }
>> - apr_thread_mutex_unlock(workers->lock);
>> ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
>> "h2_workers: cleanup all workers terminated");
>> + apr_thread_mutex_lock(workers->lock);
>> join_zombies(workers);
>> + apr_thread_mutex_unlock(workers->lock);
>> ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
>> "h2_workers: cleanup zombie workers joined");
>>
>
>
> Regards
>
> Rüdiger
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
> Am 20.06.2022 um 09:21 schrieb Ruediger Pluem <rpluem@apache.org>:
>
>
>
> On 6/17/22 11:24 AM, icing@apache.org wrote:
>> Author: icing
>> Date: Fri Jun 17 09:24:57 2022
>> New Revision: 1902005
>>
>> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
>> Log:
>> *) mod_http2: new implementation of h2 worker pool.
>> - O(1) cost at registration of connection processing producers
>> - no limit on registered producers
>> - join of ongoing work on unregister
>> - callbacks to unlink dependencies into other h2 code
>> - memory cleanup on workers deactivation (on idle timeouts)
>> - idle_limit as apr_time_t instead of seconds
>>
>>
>> Modified:
>> httpd/httpd/trunk/modules/http2/h2_c1.c
>> httpd/httpd/trunk/modules/http2/h2_config.c
>> httpd/httpd/trunk/modules/http2/h2_config.h
>> httpd/httpd/trunk/modules/http2/h2_mplx.c
>> httpd/httpd/trunk/modules/http2/h2_mplx.h
>> httpd/httpd/trunk/modules/http2/h2_workers.c
>> httpd/httpd/trunk/modules/http2/h2_workers.h
>> httpd/httpd/trunk/modules/http2/mod_http2.c
>
>> Modified: httpd/httpd/trunk/modules/http2/h2_workers.c
>> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_workers.c?rev=1902005&r1=1902004&r2=1902005&view=diff
>> ==============================================================================
>> --- httpd/httpd/trunk/modules/http2/h2_workers.c (original)
>> +++ httpd/httpd/trunk/modules/http2/h2_workers.c Fri Jun 17 09:24:57 2022
>>
>> @@ -385,13 +415,13 @@ static apr_status_t workers_pool_cleanup
>> }
>>
>> h2_workers *h2_workers_create(server_rec *s, apr_pool_t *pchild,
>> - int min_workers, int max_workers,
>> - int idle_secs)
>> + int max_slots, int min_active, apr_time_t idle_limit)
>> {
>> apr_status_t rv;
>> h2_workers *workers;
>> apr_pool_t *pool;
>> - int i, n;
>> + apr_allocator_t *allocator;
>> + int i, locked = 0;
>>
>> ap_assert(s);
>> ap_assert(pchild);
>> @@ -401,7 +431,16 @@ h2_workers *h2_workers_create(server_rec
>> * guarded by our lock. Without this pool, all subpool creations would
>> * happen on the pool handed to us, which we do not guard.
>> */
>> - apr_pool_create(&pool, pchild);
>> + rv = apr_allocator_create(&allocator);
>> + if (rv != APR_SUCCESS) {
>> + goto cleanup;
>> + }
>> + rv = apr_pool_create_ex(&pool, pchild, NULL, allocator);
>> + if (rv != APR_SUCCESS) {
>> + apr_allocator_destroy(allocator);
>> + goto cleanup;
>> + }
>> + apr_allocator_owner_set(allocator, pool);
>> apr_pool_tag(pool, "h2_workers");
>> workers = apr_pcalloc(pool, sizeof(h2_workers));
>> if (!workers) {
>> @@ -410,19 +449,23 @@ h2_workers *h2_workers_create(server_rec
>>
>> workers->s = s;
>> workers->pool = pool;
>> - workers->min_workers = min_workers;
>> - workers->max_workers = max_workers;
>> - workers->max_idle_secs = (idle_secs > 0)? idle_secs : 10;
>> + workers->min_active = min_active;
>> + workers->max_slots = max_slots;
>> + workers->idle_limit = (idle_limit > 0)? idle_limit : apr_time_from_sec(10);
>> + workers->dynamic = (workers->min_active < workers->max_slots);
>> +
>> + ap_log_error(APLOG_MARK, APLOG_INFO, 0, workers->s,
>> + "h2_workers: created with min=%d max=%d idle_ms=%d",
>> + workers->min_active, workers->max_slots,
>> + (int)apr_time_as_msec(idle_limit));
>> +
>> + APR_RING_INIT(&workers->idle, h2_slot, link);
>> + APR_RING_INIT(&workers->busy, h2_slot, link);
>> + APR_RING_INIT(&workers->free, h2_slot, link);
>> + APR_RING_INIT(&workers->zombie, h2_slot, link);
>>
>> - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
>> - "h2_workers: created with min=%d max=%d idle_timeout=%d sec",
>> - workers->min_workers, workers->max_workers,
>> - (int)workers->max_idle_secs);
>> - /* FIXME: the fifo set we use here has limited capacity. Once the
>> - * set is full, connections with new requests do a wait.
>> - */
>> - rv = h2_fifo_set_create(&workers->mplxs, pool, 16 * 1024);
>> - if (rv != APR_SUCCESS) goto cleanup;
>> + APR_RING_INIT(&workers->prod_active, ap_conn_producer_t, link);
>> + APR_RING_INIT(&workers->prod_idle, ap_conn_producer_t, link);
>>
>> rv = apr_threadattr_create(&workers->thread_attr, workers->pool);
>> if (rv != APR_SUCCESS) goto cleanup;
>> @@ -441,32 +484,35 @@ h2_workers *h2_workers_create(server_rec
>> if (rv != APR_SUCCESS) goto cleanup;
>> rv = apr_thread_cond_create(&workers->all_done, workers->pool);
>> if (rv != APR_SUCCESS) goto cleanup;
>> + rv = apr_thread_cond_create(&workers->prod_done, workers->pool);
>> + if (rv != APR_SUCCESS) goto cleanup;
>>
>> - n = workers->nslots = workers->max_workers;
>> - workers->slots = apr_pcalloc(workers->pool, n * sizeof(h2_slot));
>> - if (workers->slots == NULL) {
>> - n = workers->nslots = 0;
>> - rv = APR_ENOMEM;
>> - goto cleanup;
>> - }
>> - for (i = 0; i < n; ++i) {
>> + apr_thread_mutex_lock(workers->lock);
>> + locked = 1;
>> +
>> + /* create the slots and put them on the free list */
>> + workers->slots = apr_pcalloc(workers->pool, workers->max_slots * sizeof(h2_slot));
>> +
>> + for (i = 0; i < workers->max_slots; ++i) {
>> workers->slots[i].id = i;
>> - }
>> - /* we activate all for now, TODO: support min_workers again.
>> - * do this in reverse for vanity reasons so slot 0 will most
>> - * likely be at head of idle queue. */
>> - n = workers->min_workers;
>> - for (i = n-1; i >= 0; --i) {
>> - rv = activate_slot(workers, &workers->slots[i]);
>> + workers->slots[i].state = H2_SLOT_FREE;
>> + workers->slots[i].workers = workers;
>> + APR_RING_ELEM_INIT(&workers->slots[i], link);
>> + APR_RING_INSERT_TAIL(&workers->free, &workers->slots[i], h2_slot, link);
>> + rv = apr_thread_cond_create(&workers->slots[i].more_work, workers->pool);
>> if (rv != APR_SUCCESS) goto cleanup;
>> }
>> - /* the rest of the slots go on the free list */
>> - for(i = n; i < workers->nslots; ++i) {
>> - push_slot(&workers->free, &workers->slots[i]);
>> +
>> + /* activate the min amount of workers */
>> + for (i = 0; i < workers->min_active; ++i) {
>> + rv = activate_slot(workers);
>> + if (rv != APR_SUCCESS) goto cleanup;
>> }
>> - workers->dynamic = (workers->worker_count < workers->max_workers);
>>
>> cleanup:
>> + if (locked) {
>> + apr_thread_mutex_unlock(workers->lock);
>> + }
>> if (rv == APR_SUCCESS) {
>> /* Stop/join the workers threads when the MPM child exits (pchild is
>> * destroyed), and as a pre_cleanup of pchild thus before the threads
>> @@ -476,24 +522,84 @@ cleanup:
>> apr_pool_pre_cleanup_register(pchild, workers, workers_pool_cleanup);
>> return workers;
>> }
>> + ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, workers->s,
>> + "h2_workers: errors initializing");
>> return NULL;
>> }
>
> Open style question with an ask for opinions: Would it be better to use
>
> pool instead of workers->pool
> min_active instead of workers->min_active
> max_slots instead of workers->max_slots
>
> ?
>
> Would that make the code more
>
> - readable
> - faster
>
> or is the current approach better?

Maybe it would be faster on older compilers, but modern optimizers will most likely generate the same code either way, I believe.

Readability is important, but it is unfortunately also by definition subjective. The way of thinking that I *try* to follow these days is:

1. pass args n, m to xxx_init(n, m)
2. check n, m for consistency, apply default values, and assign them to xxx->n, xxx->m
3. it may now be that n != xxx->n or m != xxx->m
4. log that init(n, m) led to the creation of xxx with its effective n, m
5. init the rest of xxx according to xxx->n and xxx->m

One could modify the n, m parameters directly for defaults or other adjustments, then assign them to xxx. But then one can no longer log both the requested and the effective values.

An example of this is 'idle_limit', which is assigned a default value when it is 0. After that, code needs to use workers->idle_limit and not the idle_limit init parameter.

Because I make so many mistakes, I feel more comfortable using the assigned values that are also used in the rest of the code.
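[Editor's note: the five steps above can be sketched in plain C. This is an illustrative example with hypothetical names (workers_t, workers_create, the default values) standing in for the real h2_workers code, not the module's actual implementation.]

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical struct standing in for h2_workers */
typedef struct {
    int min_active;
    int max_slots;
    long idle_limit_ms;
} workers_t;

static workers_t *workers_create(int min_active, int max_slots,
                                 long idle_limit_ms)
{
    /* step 1: raw args arrive as parameters */
    workers_t *w = calloc(1, sizeof(*w));
    if (!w)
        return NULL;

    /* step 2: check args for consistency, apply defaults,
     * and assign the adjusted values to the struct */
    w->min_active = (min_active > 0) ? min_active : 1;
    w->max_slots = (max_slots >= w->min_active) ? max_slots : w->min_active;
    w->idle_limit_ms = (idle_limit_ms > 0) ? idle_limit_ms : 10000;

    /* step 3/4: requested and effective values may differ now,
     * so log both */
    fprintf(stderr, "workers_create(min=%d, max=%d, idle_ms=%ld) "
            "-> effective min=%d, max=%d, idle_ms=%ld\n",
            min_active, max_slots, idle_limit_ms,
            w->min_active, w->max_slots, w->idle_limit_ms);

    /* step 5: init everything else from w->..., never from
     * the raw parameters */
    return w;
}
```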

Cheers,
Stefan

> Regards
>
> Rüdiger
>
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
On Fri, Jun 17, 2022 at 11:24 AM <icing@apache.org> wrote:
>
> --- httpd/httpd/trunk/modules/http2/h2_workers.h (original)
> +++ httpd/httpd/trunk/modules/http2/h2_workers.h Fri Jun 17 09:24:57 2022
> @@ -28,59 +28,94 @@ struct h2_mplx;
> struct h2_request;
> struct h2_fifo;
>
> -struct h2_slot;
> -
> typedef struct h2_workers h2_workers;
>
> -struct h2_workers {

Since h2_workers is now opaque, some debug code (-DAPR_POOL_DEBUG) now raises:

h2_mplx.c: In function ‘h2_mplx_c1_process’:
h2_mplx.c:717:46: error: dereferencing pointer to incomplete type
‘struct h2_workers’
mem_w = apr_pool_num_bytes(m->workers->pool, 1);

Not sure how to fix it though.

Cheers;
Yann.
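[Editor's note: a common fix for this kind of incomplete-type error is to keep the struct opaque and export an accessor, so that only h2_workers.c dereferences it — e.g. a hypothetical `apr_pool_t *h2_workers_get_pool(h2_workers *workers)` would let h2_mplx.c keep its logging. The general pattern, sketched in plain C with made-up names:]

```c
#include <stdlib.h>

/* --- what would live in the header: callers see only an
 *     incomplete (opaque) type plus accessor declarations --- */
typedef struct widget widget;           /* opaque to callers */
widget *widget_create(int value);
int widget_get_value(const widget *w);  /* accessor instead of w->value */
void widget_destroy(widget *w);

/* --- what would live in the .c file: the only translation unit
 *     allowed to dereference struct widget --- */
struct widget {
    int value;
};

widget *widget_create(int value)
{
    widget *w = malloc(sizeof(*w));
    if (w)
        w->value = value;
    return w;
}

int widget_get_value(const widget *w)
{
    return w->value;  /* dereference stays behind the API boundary */
}

void widget_destroy(widget *w)
{
    free(w);
}
```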
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
> Am 20.06.2022 um 13:08 schrieb Yann Ylavic <ylavic.dev@gmail.com>:
>
> On Fri, Jun 17, 2022 at 11:24 AM <icing@apache.org> wrote:
>>
>> --- httpd/httpd/trunk/modules/http2/h2_workers.h (original)
>> +++ httpd/httpd/trunk/modules/http2/h2_workers.h Fri Jun 17 09:24:57 2022
>> @@ -28,59 +28,94 @@ struct h2_mplx;
>> struct h2_request;
>> struct h2_fifo;
>>
>> -struct h2_slot;
>> -
>> typedef struct h2_workers h2_workers;
>>
>> -struct h2_workers {
>
> Since h2_workers is now opaque, some debug code (-DAPR_POOL_DEBUG) now raises:
>
> h2_mplx.c: In function ‘h2_mplx_c1_process’:
> h2_mplx.c:717:46: error: dereferencing pointer to incomplete type
> ‘struct h2_workers’
> mem_w = apr_pool_num_bytes(m->workers->pool, 1);
>
> Not sure how to fix it though.

I'll take that number out of the logging. If we want to debug the workers pool, we probably should add something there which can then also use the lock for safety.

Cheers,
Stefan


>
> Cheers;
> Yann.
Re: svn commit: r1902005 - in /httpd/httpd/trunk/modules/http2: h2_c1.c h2_config.c h2_config.h h2_mplx.c h2_mplx.h h2_workers.c h2_workers.h mod_http2.c
On Mon, Jun 20, 2022 at 1:23 PM Stefan Eissing <stefan@eissing.org> wrote:
>
> > Am 20.06.2022 um 13:08 schrieb Yann Ylavic <ylavic.dev@gmail.com>:
> >
> > On Fri, Jun 17, 2022 at 11:24 AM <icing@apache.org> wrote:
> >>
> >> --- httpd/httpd/trunk/modules/http2/h2_workers.h (original)
> >> +++ httpd/httpd/trunk/modules/http2/h2_workers.h Fri Jun 17 09:24:57 2022
> >> @@ -28,59 +28,94 @@ struct h2_mplx;
> >> struct h2_request;
> >> struct h2_fifo;
> >>
> >> -struct h2_slot;
> >> -
> >> typedef struct h2_workers h2_workers;
> >>
> >> -struct h2_workers {
> >
> > Since h2_workers is now opaque, some debug code (-DAPR_POOL_DEBUG) now raises:
> >
> > h2_mplx.c: In function ‘h2_mplx_c1_process’:
> > h2_mplx.c:717:46: error: dereferencing pointer to incomplete type
> > ‘struct h2_workers’
> > mem_w = apr_pool_num_bytes(m->workers->pool, 1);
> >
> > Not sure how to fix it though.
>
> I take that number out from the logging. If we want to debug the workers pool, we pbly should add something there which can then also use the lock for safety.

Fine by me, thanks!

Regards;
Yann.