Mailing List Archive

Re: Support for AARCH64
Hola Guillaume,

Thank you for uploading the new packages!

I've just tested Ubuntu 20.04 and CentOS 8:

1) Ubuntu
1.1) curl -s https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh | sudo bash
1.2) apt install varnish - installs 20200615.weekly
All is OK!
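
(To double-check that apt really picked the weekly build, and from which
repository:)

apt policy varnish
varnishd -V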

2) CentOS
2.1) curl -s https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.rpm.sh | sudo bash
This adds varnishcache_varnish-weekly and
varnishcache_varnish-weekly-source YUM repositories
2.2) yum install varnish - installs 6.0.2
2.3) yum --disablerepo="*" --enablerepo="varnishcache_varnish-weekly" list available
Last metadata expiration check: 0:01:53 ago on Wed 17 Jun 2020 07:33:55 AM UTC.

There are no packages in the new yum repository!
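
(Two things that might be worth checking: stale yum metadata, and CentOS
8's module filtering, which can hide same-named packages coming from
third-party repositories:)

sudo yum clean metadata && sudo yum makecache
sudo dnf module list varnish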

2.4) I was able to localinstall it though
2.4.1) yum install jemalloc
2.4.2) wget --content-disposition https://packagecloud.io/varnishcache/varnish-weekly/packages/el/8/varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm
2.4.3) yum localinstall varnish-20200615.weekly-0.0.el8.aarch64.rpm
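
(And to confirm the locally installed version:)

rpm -q varnish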

Am I missing some step with the PackageCloud repository, or is there
some issue?

Gracias,
Emilio

On Tue, Jun 16, 2020 at 18:39, Guillaume Quintard
(<guillaume@varnish-software.com>) wrote:

> Ola,
>
> Pål just pushed Monday's batch, so you get amd64 and aarch64 packages for
> all the platforms. Go forth and test, the paint is still very wet.
>
> Bonne journée!
>
> --
> Guillaume Quintard
>
>
> On Tue, Jun 16, 2020 at 5:28 AM Emilio Fernandes <
> emilio.fernandes70@gmail.com> wrote:
>
>> Hi,
>>
>>> When could we expect the new aarch64 binaries at
>> https://packagecloud.io/varnishcache/varnish-weekly ?
>>
>> Gracias!
>> Emilio
>>
>> On Wed, Apr 15, 2020 at 14:33, Emilio Fernandes
>> (<emilio.fernandes70@gmail.com>) wrote:
>>
>>>
>>>
>>> On Thu, Mar 26, 2020 at 10:15, Martin Grigorov
>>> (<martin.grigorov@gmail.com>) wrote:
>>>
>>>> Hello,
>>>>
>>>> Here is the PR: https://github.com/varnishcache/varnish-cache/pull/3263
>>>> I will add some more documentation about the new setup.
>>>> Any feedback is welcome!
>>>>
>>>
>>> Nice work, Martin!
>>>
>>> Gracias!
>>> Emilio
>>>
>>>
>>>>
>>>> Regards,
>>>> Martin
>>>>
>>>> On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov <
>>>> martin.grigorov@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard <
>>>>> guillaume@varnish-software.com> wrote:
>>>>>
>>>>>> is that script running as root?
>>>>>>
>>>>>
>>>>> Yes.
>>>>> I also added 'USER root' to its Dockerfile and '-u 0' to 'docker run'
>>>>> arguments but it still doesn't work.
>>>>> The x86 build is OK.
>>>>> It must be something in the base docker image.
>>>>> I've disabled the Alpine aarch64 job for now.
>>>>> I'll send a PR tomorrow!
>>>>>
>>>>> Regards,
>>>>> Martin
>>>>>
>>>>>
>>>>>> --
>>>>>> Guillaume Quintard
>>>>>>
>>>>>>
>>>>>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov <
>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I've moved the 'dist' job to run in parallel with 'tar_pkg_tools',
>>>>>>> and the results from both are shared in the workspace for the actual
>>>>>>> packaging jobs.
>>>>>>> Now the new error for aarch64-apk job is:
>>>>>>>
>>>>>>> abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD...
>>>>>>> DEBUG: 4
>>>>>>> abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using
>>>>>>> abuild 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000
>>>>>>> >>> varnish: Checking sanity of /package/APKBUILD...
>>>>>>> >>> WARNING: varnish: No maintainer
>>>>>>> >>> varnish: Analyzing dependencies...
>>>>>>> >>> varnish: Installing for
>>>>>>> build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev
>>>>>>> py-docutils linux-headers libunwind-dev python py3-sphinx
>>>>>>> Waiting for repository lock
>>>>>>> ERROR: Unable to lock database: Bad file descriptor
>>>>>>> ERROR: Failed to open apk database: Bad file descriptor
>>>>>>> >>> ERROR: varnish: builddeps failed
>>>>>>> >>> varnish: Uninstalling dependencies...
>>>>>>> Waiting for repository lock
>>>>>>> ERROR: Unable to lock database: Bad file descriptor
>>>>>>> ERROR: Failed to open apk database: Bad file descriptor
>>>>>>>
>>>>>>> Google suggested doing this:
>>>>>>> rm -rf /var/cache/apk
>>>>>>> mkdir /var/cache/apk
>>>>>>>
>>>>>>> It fails at 'abuild -r' -
>>>>>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61
>>>>>>>
>>>>>>> Any hints?
>>>>>>>
>>>>>>> Martin
>>>>>>>
>>>>>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard <
>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> So, you are pointing at the `dist` job, whose sole role is to
>>>>>>>> provide us with a dist tarball, so we don't need that command line to work
>>>>>>>> for everyone, just for that specific platform.
>>>>>>>>
>>>>>>>> On the other hand,
>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is
>>>>>>>> closer to what you want: `distcheck` will be called on all platforms, and you
>>>>>>>> can see that it has the `--with-unwind` argument.
>>>>>>>> --
>>>>>>>> Guillaume Quintard
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov <
>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard <
>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>
>>>>>>>>>> Compare your configure line with what's currently in use (or the
>>>>>>>>>> APKBUILD file); there are a few options (with-unwind, without-jemalloc,
>>>>>>>>>> etc.) that need to be set.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The configure line comes from "./autogen.des":
>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42
>>>>>>>>> It is called at:
>>>>>>>>>
>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40
>>>>>>>>> In my branch at:
>>>>>>>>>
>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26
>>>>>>>>>
>>>>>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for
>>>>>>>>> Alpine is fine.
>>>>>>>>> The aarch64 builds for CentOS 7 and Ubuntu 18.04 are also fine.
>>>>>>>>>
>>>>>>>>> Martin
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov <
>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov <
>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard <
>>>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Martin,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thank you for that.
>>>>>>>>>>>>> A few remarks and questions:
>>>>>>>>>>>>> - how much time does the "docker build" step take? We can
>>>>>>>>>>>>> possibly speed things up by pushing images to Docker Hub, as they don't
>>>>>>>>>>>>> need to change very often.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Such an optimization would definitely be a good thing to do!
>>>>>>>>>>>> At the moment, with the 'machine' executor, it fetches the base
>>>>>>>>>>>> image and then builds all the Docker layers again and again.
>>>>>>>>>>>> Here are the timings:
>>>>>>>>>>>> 1) Spinning up a VM - around 10secs
>>>>>>>>>>>> 2) prepare env variables - 0secs
>>>>>>>>>>>> 3) checkout code (varnish-cache) - 5secs
>>>>>>>>>>>> 4) activate QEMU - 2secs
>>>>>>>>>>>> 5) build packages
>>>>>>>>>>>> 5.1) x86 deb - 3m 30secs
>>>>>>>>>>>> 5.2) x86 rpm - 2m 50secs
>>>>>>>>>>>> 5.3) aarch64 rpm - 35mins
>>>>>>>>>>>> 5.4) aarch64 deb - 45mins
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job? The
>>>>>>>>>>>>> idea was to have it cloned once in tar-pkg-tools for consistency and
>>>>>>>>>>>>> reproducibility, which we lose here.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I will extract the common steps once I see it working. This is
>>>>>>>>>>>> my first CircleCI project and I'm still finding my way around it!
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> - do we want to change things for the amd64 platforms for the
>>>>>>>>>>>>> sake of consistency?
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> So far there is nothing specific to amd64 or aarch64, except
>>>>>>>>>>>> the base Docker images.
>>>>>>>>>>>> For example make-deb-packages.sh is reused for both amd64 and
>>>>>>>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (alpine).
>>>>>>>>>>>>
>>>>>>>>>>>> Once I feel the change is almost finished I will open a Pull
>>>>>>>>>>>> Request for more comments!
>>>>>>>>>>>>
>>>>>>>>>>>> Martin
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov <
>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov <
>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov <
>>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard <
>>>>>>>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Offering arm64 packages requires a few things:
>>>>>>>>>>>>>>>>> - arm64-compatible code (all good in
>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache)
>>>>>>>>>>>>>>>>> - arm64-compatible package framework (all good in
>>>>>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache)
>>>>>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below)
>>>>>>>>>>>>>>>>> - infrastructure to store and deliver (
>>>>>>>>>>>>>>>>> https://packagecloud.io/varnishcache)
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So, everything is in place, except for the third point. At
>>>>>>>>>>>>>>>>> the moment, there are two concurrent CI implementations:
>>>>>>>>>>>>>>>>> - travis:
>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's
>>>>>>>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> - circleci:
>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the
>>>>>>>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all
>>>>>>>>>>>>>>>>> the packaged platforms
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The issue is that CircleCI doesn't support arm64
>>>>>>>>>>>>>>>>> containers (for now?), so we would need to re-implement the packaging logic
>>>>>>>>>>>>>>>>> in Travis. It's not a big problem, but it's currently not a priority on my
>>>>>>>>>>>>>>>>> side.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> However, I am totally ready to provide help if someone
>>>>>>>>>>>>>>>>> wants to take that up. The added benefit is that Travis would be able to
>>>>>>>>>>>>>>>>> handle everything and we can retire the CircleCI experiment.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I will take a look in the coming days and ask you if I need
>>>>>>>>>>>>>>>> help!
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I've taken a look at the current setup and here is what I've
>>>>>>>>>>>>>>> found, as problems and possible solutions:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 1) Circle CI
>>>>>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on
>>>>>>>>>>>>>>> x86_64, so there is no way to build the packages in a "native" environment
>>>>>>>>>>>>>>> 1.2) possible solutions
>>>>>>>>>>>>>>> 1.2.1) use multiarch cross build
>>>>>>>>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via
>>>>>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and
>>>>>>>>>>>>>>> then builds and runs a custom Docker image that executes a shell script
>>>>>>>>>>>>>>> with the build steps
>>>>>>>>>>>>>>> It will look something like
>>>>>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but
>>>>>>>>>>>>>>> instead of uploading the Docker image as a last step it will run it.
>>>>>>>>>>>>>>> The RPM and DEB build-related code from the current config.yml
>>>>>>>>>>>>>>> will be extracted into shell scripts, which will be copied into
>>>>>>>>>>>>>>> the custom Docker images.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Of these two possible ways I have a better picture in my head
>>>>>>>>>>>>>>> of how to do 1.2.2, but I don't mind going deep into 1.2.1 if
>>>>>>>>>>>>>>> that is what you'd prefer.
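>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> For 1.2.2, a rough sketch of the idea (the image tag, Dockerfile
>>>>>>>>>>>>>>> name and script path below are placeholders, not a final layout):
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> # one-off on the x86_64 host: register the QEMU binfmt handlers
>>>>>>>>>>>>>>> docker run --rm --privileged multiarch/qemu-user-static:register --reset
>>>>>>>>>>>>>>> # build the aarch64 builder image and run the packaging script in it
>>>>>>>>>>>>>>> docker build -t varnish-pkg-aarch64 -f Dockerfile.aarch64 .
>>>>>>>>>>>>>>> docker run --rm -v "$(pwd)":/work varnish-pkg-aarch64 /work/make-rpm-packages.sh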
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I've decided to stay with Circle CI and use the 'machine'
>>>>>>>>>>>>>> executor with QEMU.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The changed config.yml can be seen at
>>>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and
>>>>>>>>>>>>>> the build at
>>>>>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8
>>>>>>>>>>>>>> The builds on x86 take 3-4 mins, but aarch64
>>>>>>>>>>>>>> (emulation!) takes ~40 mins.
>>>>>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for
>>>>>>>>>>>>>> CentOS 7 and Ubuntu 18.04, both amd64 and aarch64.
>>>>>>>>>>>>>> TODOs:
>>>>>>>>>>>>>> - migrate Alpine
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>> Build on Alpine aarch64 fails with:
>>>>>>>>>>> ...
>>>>>>>>>>> automake: this behaviour will change in future Automake
>>>>>>>>>>> versions: they will
>>>>>>>>>>> automake: unconditionally cause object files to be placed in the
>>>>>>>>>>> same subdirectory
>>>>>>>>>>> automake: of the corresponding sources.
>>>>>>>>>>> automake: project, to avoid future incompatibilities.
>>>>>>>>>>> parallel-tests: installing 'build-aux/test-driver'
>>>>>>>>>>> lib/libvmod_debug/Makefile.am:12: warning:
>>>>>>>>>>> libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ...
>>>>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ...
>>>>>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here
>>>>>>>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/
>>>>>>>>>>> automake_boilerplate.am' included from here
>>>>>>>>>>> + autoconf
>>>>>>>>>>> + CONFIG_SHELL=/bin/sh
>>>>>>>>>>> + export CONFIG_SHELL
>>>>>>>>>>> + ./configure '--prefix=/opt/varnish'
>>>>>>>>>>> '--mandir=/opt/varnish/man' --enable-maintainer-mode
>>>>>>>>>>> --enable-developer-warnings --enable-debugging-symbols
>>>>>>>>>>> --enable-dependency-tracking --with-persistent-storage --quiet
>>>>>>>>>>> configure: WARNING: dot not found - build will fail if svg files
>>>>>>>>>>> are out of date.
>>>>>>>>>>> configure: WARNING: No system jemalloc found, using system malloc
>>>>>>>>>>> configure: error: Could not find backtrace() support
>>>>>>>>>>>
>>>>>>>>>>> Does anyone know a workaround ?
>>>>>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image
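>>>>>>>>>>>
>>>>>>>>>>> (Two untested ideas: musl libc does not ship backtrace(), which on
>>>>>>>>>>> Alpine is typically provided by the libexecinfo package; alternatively
>>>>>>>>>>> the build can be pointed at libunwind via the `--with-unwind` configure
>>>>>>>>>>> flag mentioned elsewhere in this thread:)
>>>>>>>>>>>
>>>>>>>>>>> apk add libexecinfo-dev
>>>>>>>>>>> # or: the same ./configure line as above, plus --with-unwind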
>>>>>>>>>>>
>>>>>>>>>>> Martin
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>>> - store the packages as CircleCI artifacts
>>>>>>>>>>>>>> - anything else that is still missing
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Adding more architectures would be as easy as adding a new
>>>>>>>>>>>>>> Dockerfile with a base image from the respective type.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 2) Travis CI
>>>>>>>>>>>>>>> 2.1) problems
>>>>>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle!
>>>>>>>>>>>>>>> Although if we use the CircleCI 'machine' executor it will be
>>>>>>>>>>>>>>> slower than the current 'Docker' executor!
>>>>>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu
>>>>>>>>>>>>>>> Current setup at CircleCI uses CentOS 7.
>>>>>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 3) GitHub Actions
>>>>>>>>>>>>>>> GH Actions does not support ARM64, but it supports self-hosted
>>>>>>>>>>>>>>> ARM64 runners.
>>>>>>>>>>>>>>> 3.1) The problem is that there is no way to make a self-hosted
>>>>>>>>>>>>>>> runner really private. I.e. if someone forks Varnish Cache, any
>>>>>>>>>>>>>>> commit in the fork will trigger builds on the arm64 node. There is no way
>>>>>>>>>>>>>>> to reserve the runner only for commits against
>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Do you see other problems, or maybe different approaches?
>>>>>>>>>>>>>>> Do you have a preference for which way to go?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>>> varnish-dev mailing list
>>>>>>>>>>>>>>>>> varnish-dev@varnish-cache.org
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
Re: Support for AARCH64
Thank you Emilio, I'll contact packagecloud.io to see what's what.

--
Guillaume Quintard


On Wed, Jun 17, 2020 at 1:01 AM Emilio Fernandes <
emilio.fernandes70@gmail.com> wrote:

> Hola Guillaume,
>
> [...]
>
> Am I missing some step with the PackageCloud repository, or is there
> some issue?
>
> Gracias,
> Emilio
Re: Support for AARCH64
On 17/06/2020 10:00, Emilio Fernandes wrote:
> 1.1) curl -s
> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
> | sudo bash

The fact that, with my listmaster head on, I have not censored this posting,
does not, *by any stretch*, imply any form of endorsement of this practice.

My personal 2 cents: DO NOT DO THIS. EVER. AND DO NOT POST THIS AS ADVICE TO OTHERS.

Thank you

Re: Support for AARCH64
On 6/17/20 16:56, Nils Goroll wrote:
> On 17/06/2020 10:00, Emilio Fernandes wrote:
>> 1.1) curl -s
>> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
>> | sudo bash
>
> The fact that, with my listmaster head on, I have not censored this posting,
> does not, *by any stretch*, imply any form of endorsement of this practice.
>
> > My personal 2 cents: DO NOT DO THIS. EVER. AND DO NOT POST THIS AS ADVICE TO OTHERS.
>
> Thank you

+1
To point fingers at the right people: this is what the packagecloud docs
tell you to do.

But ... the *packagecloud docs* tell you to do that!

If I could have them arrested for it, I'd think about it.

Piping the response from a web site into a root shell is stark, raving
madness.
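
For anyone who wants the same convenience without the pipe, the obvious
variant is to download, read, and only then run the very same file:

curl -fsSLo script.deb.sh \
    https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
less script.deb.sh    # read what you are about to run as root
sudo bash script.deb.sh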


Stay safe,
Geoff
--
** * * UPLEX - Nils Goroll Systemoptimierung

Scheffelstraße 32
22301 Hamburg

Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753

http://uplex.de
Re: Support for AARCH64
On Wed, Jun 17, 2020 at 3:05 PM Geoff Simmons <geoff@uplex.de> wrote:
>
> [...]
>
> If I could have them arrested for it, I'd think about it.
>
> Piping the response from a web site into a root shell is stark, raving
> madness.

Dudes, chill out and live with your time.

It's not like attackers taking control of packagecloud could send a
different payload depending on whether you curl to disk to audit the
script or yolo curl to pipe.

https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/

We've known for years that it isn't possible.

Dridi
Re: Support for AARCH64
Hi,

On Wed, Jun 17, 2020 at 17:56, Nils Goroll (<slink@schokola.de>)
wrote:

> On 17/06/2020 10:00, Emilio Fernandes wrote:
> > 1.1) curl -s
> >
> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
> > | sudo bash
>
> The fact that, with my listmaster head on, I have not censored this
> posting,
> does not, *by any stretch*, imply any form of endorsement of this practice.
>
> My personal 2 cents: DO NOT DO THIS. EVER. AND DO NOT POST THIS AS ADVICE
> TO OTHERS.
>

Actually I thought about this and ran those commands inside fresh,
throw-away Docker containers.
I fully agree that one should not execute such unknown scripts blindly!
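
(For completeness, the throw-away environment was something along these
lines:)

docker run --rm -it ubuntu:20.04 bash
# and inside the container:
apt-get update && apt-get install -y curl sudo
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh | sudo bash
apt-get install -y varnish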

Emilio


Re: Support for AARCH64
Hi Emilio,

On Wed, Jun 17, 2020 at 5:36 PM Guillaume Quintard <
guillaume@varnish-software.com> wrote:

> Thank you Emilio, I'll contact packagecloud.io to see what's what.
>
> --
> Guillaume Quintard
>
>
> On Wed, Jun 17, 2020 at 1:01 AM Emilio Fernandes <
> emilio.fernandes70@gmail.com> wrote:
>
>> Hola Guillaume,
>>
>> Thank you for uploading the new packages!
>>
>> I've just tested Ubuntu 20.04 and CentOS 8:
>>
>> 1) Ubuntu
>> 1.1) curl -s
>> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
>> | sudo bash
>> 1.2) apt install varnish - installs 20200615.weekly
>> All is OK!
>>
>> 2) CentOS
>> 2.1) curl -s
>> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.rpm.sh
>> | sudo bash
>> This adds varnishcache_varnish-weekly and
>> varnishcache_varnish-weekly-source YUM repositories
>> 2.2) yum install varnish - installs 6.0.2
>> 2.3) yum --disablerepo="*" --enablerepo="varnishcache_varnish-weekly"
>> list available
>> Last metadata expiration check: 0:01:53 ago on Wed 17 Jun 2020 07:33:55
>> AM UTC.
>>
>> There are no packages in the new yum repository!
>>
>
I am not sure whether you have noticed this answer by Dridi:
https://github.com/varnishcache/pkg-varnish-cache/issues/142#issuecomment-654380393
I've just tested your steps and indeed after `dnf module disable varnish` I
was able to install the weekly package on CentOS 8.
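
For reference, the full sequence on a fresh CentOS 8 looks roughly like:

sudo dnf module disable varnish
sudo dnf install varnish
varnishd -V    # now reports the weekly build instead of the 6.0.x module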

Regards,
Martin


>
>> 2.4) I was able to localinstall it though
>> 2.4.1) yum install jemalloc
>> 2.4.2) wget --content-disposition
>> https://packagecloud.io/varnishcache/varnish-weekly/packages/el/8/varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm
>> 2.4.3) yum localinstall
>> varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm
>>
>> Do I miss some step with the PackageCloud repository or there is some
>> issue ?
>>
>> Gracias,
>> Emilio
>>
>> El mar., 16 jun. 2020 a las 18:39, Guillaume Quintard (<
>> guillaume@varnish-software.com>) escribió:
>>
>>> Ola,
>>>
>>> Pål just pushed Monday's batch, so you get amd64 and aarch64 packages
>>> for all the platforms. Go forth and test, the paint is still very wet.
>>>
>>> Bonne journée!
>>>
>>> --
>>> Guillaume Quintard
>>>
>>>
>>> On Tue, Jun 16, 2020 at 5:28 AM Emilio Fernandes <
>>> emilio.fernandes70@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> When we could expect the new aarch64 binaries at
>>>> https://packagecloud.io/varnishcache/varnish-weekly ?
>>>>
>>>> Gracias!
>>>> Emilio
>>>>
>>>> El mié., 15 abr. 2020 a las 14:33, Emilio Fernandes (<
>>>> emilio.fernandes70@gmail.com>) escribió:
>>>>
>>>>>
>>>>>
>>>>> El jue., 26 mar. 2020 a las 10:15, Martin Grigorov (<
>>>>> martin.grigorov@gmail.com>) escribió:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> Here is the PR:
>>>>>> https://github.com/varnishcache/varnish-cache/pull/3263
>>>>>> I will add some more documentation about the new setup.
>>>>>> Any feedback is welcome!
>>>>>>
>>>>>
>>>>> Nice work, Martin!
>>>>>
>>>>> Gracias!
>>>>> Emilio
>>>>>
>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Martin
>>>>>>
>>>>>> On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov <
>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard <
>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>
>>>>>>>> is that script running as root?
>>>>>>>>
>>>>>>>
>>>>>>> Yes.
>>>>>>> I also added 'USER root' to its Dockerfile and '-u 0' to 'docker
>>>>>>> run' arguments but it still doesn't work.
>>>>>>> The x86 build is OK.
>>>>>>> It must be something in the base docker image.
>>>>>>> I've disabled the Alpine aarch64 job for now.
>>>>>>> I'll send a PR tomorrow!
>>>>>>>
>>>>>>> Regards,
>>>>>>> Martin
>>>>>>>
>>>>>>>
>>>>>>>> --
>>>>>>>> Guillaume Quintard
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov <
>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I've moved 'dist' job to be executed in parallel with
>>>>>>>>> 'tar_pkg_tools' and the results from both are shared in the workspace for
>>>>>>>>> the actual packing jobs.
>>>>>>>>> Now the new error for aarch64-apk job is:
>>>>>>>>>
>>>>>>>>> abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD...
>>>>>>>>> ]0; DEBUG: 4
>>>>>>>>> ]0;abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using
>>>>>>>>> abuild 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000
>>>>>>>>> >>> varnish: Checking sanity of /package/APKBUILD...
>>>>>>>>> >>> WARNING: varnish: No maintainer
>>>>>>>>> >>> varnish: Analyzing dependencies...
>>>>>>>>> 0% %
>>>>>>>>> ############################################>>> varnish: Installing for
>>>>>>>>> build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev
>>>>>>>>> py-docutils linux-headers libunwind-dev python py3-sphinx
>>>>>>>>> Waiting for repository lock
>>>>>>>>> ERROR: Unable to lock database: Bad file descriptor
>>>>>>>>> ERROR: Failed to open apk database: Bad file descriptor
>>>>>>>>> >>> ERROR: varnish: builddeps failed
>>>>>>>>> ]0; >>> varnish: Uninstalling dependencies...
>>>>>>>>> Waiting for repository lock
>>>>>>>>> ERROR: Unable to lock database: Bad file descriptor
>>>>>>>>> ERROR: Failed to open apk database: Bad file descriptor
>>>>>>>>>
>>>>>>>>> Google suggested to do this:
>>>>>>>>> rm -rf /var/cache/apk
>>>>>>>>> mkdir /var/cache/apk
>>>>>>>>>
>>>>>>>>> It fails at 'abuild -r' -
>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61
>>>>>>>>>
>>>>>>>>> Any hints ?
>>>>>>>>>
>>>>>>>>> Martin
>>>>>>>>>
>>>>>>>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard <
>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> So, you are pointing at the `dist` job, whose sole role is to
>>>>>>>>>> provide us with a dist tarball, so we don't need that command line to work
>>>>>>>>>> for everyone, just for that specific platform.
>>>>>>>>>>
>>>>>>>>>> On the other hand,
>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is
>>>>>>>>>> closer to what you want, `distcheck` will be call on all platform, and you
>>>>>>>>>> can see that it has the `--with-unwind` argument.
>>>>>>>>>> --
>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov <
>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard <
>>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Compare your configure line with what's currently in use (or
>>>>>>>>>>>> the apkbuild file), there are a few options (with-unwind, without-jemalloc,
>>>>>>>>>>>> etc.) That need to be set
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> The configure line comes from "./autogen.des":
>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42
>>>>>>>>>>> It is called at:
>>>>>>>>>>>
>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40
>>>>>>>>>>> In my branch at:
>>>>>>>>>>>
>>>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26
>>>>>>>>>>>
>>>>>>>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for
>>>>>>>>>>> Alpine is fine.
>>>>>>>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine.
>>>>>>>>>>>
>>>>>>>>>>> Martin
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov <
>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov <
>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard <
>>>>>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi Martin,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thank you for that.
>>>>>>>>>>>>>>> A few remarks and questions:
>>>>>>>>>>>>>>> - how much time does the "docker build" step takes? We can
>>>>>>>>>>>>>>> possibly speed things up by push images to the dockerhub, as they don't
>>>>>>>>>>>>>>> need to change very often.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Definitely such optimization would be a good thing to do!
>>>>>>>>>>>>>> At the moment, with 'machine' executor it fetches the base
>>>>>>>>>>>>>> image and then builds all the Docker layers again and again.
>>>>>>>>>>>>>> Here are the timings:
>>>>>>>>>>>>>> 1) Spinning up a VM - around 10secs
>>>>>>>>>>>>>> 2) prepare env variables - 0secs
>>>>>>>>>>>>>> 3) checkout code (varnish-cache) - 5secs
>>>>>>>>>>>>>> 4) activate QEMU - 2secs
>>>>>>>>>>>>>> 5) build packages
>>>>>>>>>>>>>> 5.1) x86 deb - 3m 30secs
>>>>>>>>>>>>>> 5.2) x86 rpm - 2m 50secs
>>>>>>>>>>>>>> 5.3) aarch64 rpm - 35mins
>>>>>>>>>>>>>> 5.4) aarch64 deb - 45mins
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job?
>>>>>>>>>>>>>>> The idea was to have it cloned once in tar-pkg-tools for consistency and
>>>>>>>>>>>>>>> reproducibility, which we lose here.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I will extract the common steps once I see it working. This
>>>>>>>>>>>>>> is my first CircleCI project and I still find my ways in it!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - do we want to change things for the amd64 platforms for
>>>>>>>>>>>>>>> the sake of consistency?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So far there is nothing specific for amd4 or aarch64, except
>>>>>>>>>>>>>> the base Docker images.
>>>>>>>>>>>>>> For example make-deb-packages.sh is reused for both amd64 and
>>>>>>>>>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (alpine).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Once I feel the change is almost finished I will open a Pull
>>>>>>>>>>>>>> Request for more comments!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov <
>>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov <
>>>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov <
>>>>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard <
>>>>>>>>>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Offering arm64 packages requires a few things:
>>>>>>>>>>>>>>>>>>> - arm64-compatible code (all good in
>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache)
>>>>>>>>>>>>>>>>>>> - arm64-compatible package framework (all good in
>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache)
>>>>>>>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below)
>>>>>>>>>>>>>>>>>>> - infrastructure to store and deliver (
>>>>>>>>>>>>>>>>>>> https://packagecloud.io/varnishcache)
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> So, everything is in place, expect for the third point.
>>>>>>>>>>>>>>>>>>> At the moment, there are two concurrent CI implementations:
>>>>>>>>>>>>>>>>>>> - travis:
>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's
>>>>>>>>>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> - circleci:
>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the
>>>>>>>>>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all
>>>>>>>>>>>>>>>>>>> the packaged platforms
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The issue is that cirecleci doesn't support arm64
>>>>>>>>>>>>>>>>>>> containers (for now?), so we would need to re-implement the packaging logic
>>>>>>>>>>>>>>>>>>> in Travis. It's not a big problem, but it's currently not a priority on my
>>>>>>>>>>>>>>>>>>> side.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> However, I am totally ready to provide help if someone
>>>>>>>>>>>>>>>>>>> wants to take that up. The added benefit it that Travis would be able to
>>>>>>>>>>>>>>>>>>> handle everything and we can retire the circleci experiment
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I will take a look in the coming days and ask you if I
>>>>>>>>>>>>>>>>>> need help!
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I've took a look at the current setup and here is what
>>>>>>>>>>>>>>>>> I've found as problems and possible solutions:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 1) Circle CI
>>>>>>>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on
>>>>>>>>>>>>>>>>> x86_64, so there is no way to build the packages in a "native" environment
>>>>>>>>>>>>>>>>> 1.2) possible solutions
>>>>>>>>>>>>>>>>> 1.2.1) use multiarch cross build
>>>>>>>>>>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via
>>>>>>>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and
>>>>>>>>>>>>>>>>> then builds and runs a custom Docker image that executes a shell script
>>>>>>>>>>>>>>>>> with the build steps
>>>>>>>>>>>>>>>>> It will look something like
>>>>>>>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but
>>>>>>>>>>>>>>>>> instead of uploading the Docker image as a last step it will run it.
>>>>>>>>>>>>>>>>> The RPM and DEB build related code from current config.yml
>>>>>>>>>>>>>>>>> will be extracted into shell scripts which will be copied in the custom
>>>>>>>>>>>>>>>>> Docker images
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> From these two possible ways I have better picture in my
>>>>>>>>>>>>>>>>> head how to do 1.2.2, but I don't mind going deep in 1.2.1 if this is what
>>>>>>>>>>>>>>>>> you'd prefer.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I've decided to stay with Circle CI and use 'machine'
>>>>>>>>>>>>>>>> executor with QEMU.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The changed config.yml could be seen at
>>>>>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and
>>>>>>>>>>>>>>>> the build at
>>>>>>>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8
>>>>>>>>>>>>>>>> The builds on x86 arch take 3-4 mins, but for aarch64
>>>>>>>>>>>>>>>> (emulation!) ~40mins
>>>>>>>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for
>>>>>>>>>>>>>>>> CentOS 7 and Ubuntu 18.04, both amd64 and aarch64.
>>>>>>>>>>>>>>>> TODOs:
>>>>>>>>>>>>>>>> - migrate Alpine
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>> Build on Alpine aarch64 fails with:
>>>>>>>>>>>>> ...
>>>>>>>>>>>>> automake: this behaviour will change in future Automake
>>>>>>>>>>>>> versions: they will
>>>>>>>>>>>>> automake: unconditionally cause object files to be placed in
>>>>>>>>>>>>> the same subdirectory
>>>>>>>>>>>>> automake: of the corresponding sources.
>>>>>>>>>>>>> automake: project, to avoid future incompatibilities.
>>>>>>>>>>>>> parallel-tests: installing 'build-aux/test-driver'
>>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:12: warning:
>>>>>>>>>>>>> libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ...
>>>>>>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ...
>>>>>>>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here
>>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/
>>>>>>>>>>>>> automake_boilerplate.am' included from here
>>>>>>>>>>>>> + autoconf
>>>>>>>>>>>>> + CONFIG_SHELL=/bin/sh
>>>>>>>>>>>>> + export CONFIG_SHELL
>>>>>>>>>>>>> + ./configure '--prefix=/opt/varnish'
>>>>>>>>>>>>> '--mandir=/opt/varnish/man' --enable-maintainer-mode
>>>>>>>>>>>>> --enable-developer-warnings --enable-debugging-symbols
>>>>>>>>>>>>> --enable-dependency-tracking --with-persistent-storage --quiet
>>>>>>>>>>>>> configure: WARNING: dot not found - build will fail if svg
>>>>>>>>>>>>> files are out of date.
>>>>>>>>>>>>> configure: WARNING: No system jemalloc found, using system
>>>>>>>>>>>>> malloc
>>>>>>>>>>>>> configure: error: Could not find backtrace() support
>>>>>>>>>>>>>
>>>>>>>>>>>>> Does anyone know a workaround ?
>>>>>>>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image
>>>>>>>>>>>>>
>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> - store the packages as CircleCI artifacts
>>>>>>>>>>>>>>>> - anything else that is still missing
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Adding more architectures would be as easy as adding a new
>>>>>>>>>>>>>>>> Dockerfile with a base image of the respective architecture;
>>>>>>>>>>>>>>>> see the sketch below.
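>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> A sketch of such a new Dockerfile, assuming a hypothetical
>>>>>>>>>>>>>>>> Fedora aarch64 target (base image and file names illustrative):
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> cat > Dockerfile.fedora-aarch64 <<'EOF'
>>>>>>>>>>>>>>>> FROM arm64v8/fedora:31
>>>>>>>>>>>>>>>> COPY make-rpm-packages.sh /
>>>>>>>>>>>>>>>> ENTRYPOINT ["/make-rpm-packages.sh"]
>>>>>>>>>>>>>>>> EOF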
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2) Travis CI
>>>>>>>>>>>>>>>>> 2.1) problems
>>>>>>>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle!
>>>>>>>>>>>>>>>>> Although if we use the CircleCI 'machine' executor, it will be
>>>>>>>>>>>>>>>>> slower than the current 'Docker' executor!
>>>>>>>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu
>>>>>>>>>>>>>>>>> The current setup at CircleCI uses CentOS 7.
>>>>>>>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 3) GitHub Actions
>>>>>>>>>>>>>>>>> GH Actions does not support ARM64, but it supports self-hosted
>>>>>>>>>>>>>>>>> ARM64 runners
>>>>>>>>>>>>>>>>> 3.1) The problem is that there is no way to make a self-hosted
>>>>>>>>>>>>>>>>> runner really private, i.e. if someone forks Varnish Cache, any
>>>>>>>>>>>>>>>>> commit in the fork will trigger builds on the arm64 node. There
>>>>>>>>>>>>>>>>> is no way to reserve the runner only for commits against
>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Do you see other problems or maybe different approaches?
>>>>>>>>>>>>>>>>> Do you have a preference which way to go?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>>>>> varnish-dev mailing list
>>>>>>>>>>>>>>>>>>> varnish-dev@varnish-cache.org
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
Re: Support for AARCH64 [ In reply to ]
Hi Martin,

On Tue, Jul 14, 2020 at 3:21 PM Martin Grigorov (<
martin.grigorov@gmail.com>) wrote:

> Hi Emilio,
>
> On Wed, Jun 17, 2020 at 5:36 PM Guillaume Quintard <
> guillaume@varnish-software.com> wrote:
>
>> Thank you Emilio, I'll contact packagecloud.io to see what's what.
>>
>> --
>> Guillaume Quintard
>>
>>
>> On Wed, Jun 17, 2020 at 1:01 AM Emilio Fernandes <
>> emilio.fernandes70@gmail.com> wrote:
>>
>>> Hola Guillaume,
>>>
>>> Thank you for uploading the new packages!
>>>
>>> I've just tested Ubuntu 20.04 and Centos 8
>>>
>>> 1) Ubuntu
>>> 1.1) curl -s
>>> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
>>> | sudo bash
>>> 1.2) apt install varnish - installs 20200615.weekly
>>> All is OK!
>>>
>>> 2) Centos
>>> 2.1) curl -s
>>> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.rpm.sh
>>> | sudo bash
>>> This adds varnishcache_varnish-weekly and
>>> varnishcache_varnish-weekly-source YUM repositories
>>> 2.2) yum install varnish - installs 6.0.2
>>> 2.3) yum --disablerepo="*" --enablerepo="varnishcache_varnish-weekly"
>>> list available
>>> Last metadata expiration check: 0:01:53 ago on Wed 17 Jun 2020 07:33:55
>>> AM UTC.
>>>
>>> there are no packages in the new yum repository!
>>>
>>
> I am not sure whether you have noticed this answer by Dridi:
> https://github.com/varnishcache/pkg-varnish-cache/issues/142#issuecomment-654380393
> I've just tested your steps and indeed after `dnf module disable varnish`
> I was able to install the weekly package on CentOS 8.
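>
> For the record, on CentOS 8 the sequence is roughly (assuming the
> weekly repository is already configured):
>
> dnf module disable varnish
> dnf install varnish
>
> The 'varnish' module stream otherwise shadows the packagecloud
> repository, which is why plain 'yum install varnish' kept picking 6.0.2.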
>

No, I wasn't aware of this discussion.
The weekly package installed successfully now!
Thank you!

Emilio


>
> Regards,
> Martin
>
>
>>
>>> 2.4) I was able to localinstall it though
>>> 2.4.1) yum install jemalloc
>>> 2.4.2) wget --content-disposition
>>> https://packagecloud.io/varnishcache/varnish-weekly/packages/el/8/varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm
>>> 2.4.3) yum localinstall
>>> varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm
>>>
>>> Am I missing some step with the PackageCloud repository, or is there
>>> some issue?
>>>
>>> Gracias,
>>> Emilio
>>>
>>> On Tue, Jun 16, 2020 at 6:39 PM Guillaume Quintard (<
>>> guillaume@varnish-software.com>) wrote:
>>>
>>>> Ola,
>>>>
>>>> Pål just pushed Monday's batch, so you get amd64 and aarch64 packages
>>>> for all the platforms. Go forth and test, the paint is still very wet.
>>>>
>>>> Bonne journée!
>>>>
>>>> --
>>>> Guillaume Quintard
>>>>
>>>>
>>>> On Tue, Jun 16, 2020 at 5:28 AM Emilio Fernandes <
>>>> emilio.fernandes70@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> When can we expect the new aarch64 binaries at
>>>>> https://packagecloud.io/varnishcache/varnish-weekly ?
>>>>>
>>>>> Gracias!
>>>>> Emilio
>>>>>
>>>>> On Wed, Apr 15, 2020 at 2:33 PM Emilio Fernandes (<
>>>>> emilio.fernandes70@gmail.com>) wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 26, 2020 at 10:15 AM Martin Grigorov (<
>>>>>> martin.grigorov@gmail.com>) wrote:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> Here is the PR:
>>>>>>> https://github.com/varnishcache/varnish-cache/pull/3263
>>>>>>> I will add some more documentation about the new setup.
>>>>>>> Any feedback is welcome!
>>>>>>>
>>>>>>
>>>>>> Nice work, Martin!
>>>>>>
>>>>>> Gracias!
>>>>>> Emilio
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Regards,
>>>>>>> Martin
>>>>>>>
>>>>>>> On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov <
>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard <
>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>
>>>>>>>>> is that script running as root?
>>>>>>>>>
>>>>>>>>
>>>>>>>> Yes.
>>>>>>>> I also added 'USER root' to its Dockerfile and '-u 0' to the
>>>>>>>> 'docker run' arguments, but it still doesn't work.
>>>>>>>> The x86 build is OK.
>>>>>>>> It must be something in the base docker image.
>>>>>>>> I've disabled the Alpine aarch64 job for now.
>>>>>>>> I'll send a PR tomorrow!
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Martin
>>>>>>>>
>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Guillaume Quintard
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov <
>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I've moved the 'dist' job to be executed in parallel with
>>>>>>>>>> 'tar_pkg_tools' and the results from both are shared in the workspace for
>>>>>>>>>> the actual packing jobs.
>>>>>>>>>> Now the new error for aarch64-apk job is:
>>>>>>>>>>
>>>>>>>>>> abuild: varnish >>> varnish: Updating the sha512sums in
>>>>>>>>>> APKBUILD...
>>>>>>>>>> ]0; DEBUG: 4
>>>>>>>>>> ]0;abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using
>>>>>>>>>> abuild 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000
>>>>>>>>>> >>> varnish: Checking sanity of /package/APKBUILD...
>>>>>>>>>> >>> WARNING: varnish: No maintainer
>>>>>>>>>> >>> varnish: Analyzing dependencies...
>>>>>>>>>> 0% %
>>>>>>>>>> ############################################>>> varnish: Installing for
>>>>>>>>>> build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev
>>>>>>>>>> py-docutils linux-headers libunwind-dev python py3-sphinx
>>>>>>>>>> Waiting for repository lock
>>>>>>>>>> ERROR: Unable to lock database: Bad file descriptor
>>>>>>>>>> ERROR: Failed to open apk database: Bad file descriptor
>>>>>>>>>> >>> ERROR: varnish: builddeps failed
>>>>>>>>>> ]0; >>> varnish: Uninstalling dependencies...
>>>>>>>>>> Waiting for repository lock
>>>>>>>>>> ERROR: Unable to lock database: Bad file descriptor
>>>>>>>>>> ERROR: Failed to open apk database: Bad file descriptor
>>>>>>>>>>
>>>>>>>>>> Google suggested doing this:
>>>>>>>>>> rm -rf /var/cache/apk
>>>>>>>>>> mkdir /var/cache/apk
>>>>>>>>>>
>>>>>>>>>> It fails at 'abuild -r' -
>>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61
>>>>>>>>>>
>>>>>>>>>> Any hints ?
>>>>>>>>>>
>>>>>>>>>> Martin
>>>>>>>>>>
>>>>>>>>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard <
>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> So, you are pointing at the `dist` job, whose sole role is to
>>>>>>>>>>> provide us with a dist tarball, so we don't need that command line to work
>>>>>>>>>>> for everyone, just for that specific platform.
>>>>>>>>>>>
>>>>>>>>>>> On the other hand,
>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is
>>>>>>>>>>> closer to what you want: `distcheck` will be called on all
>>>>>>>>>>> platforms, and you can see that it has the `--with-unwind` argument.
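>>>>>>>>>>>
>>>>>>>>>>> A minimal sketch of your failing configure call with that flag
>>>>>>>>>>> added (same flags as in your log, nothing else changed;
>>>>>>>>>>> libunwind-dev is already in your builddeps list):
>>>>>>>>>>>
>>>>>>>>>>> ./configure --prefix=/opt/varnish --mandir=/opt/varnish/man \
>>>>>>>>>>>     --enable-maintainer-mode --enable-developer-warnings \
>>>>>>>>>>>     --enable-debugging-symbols --enable-dependency-tracking \
>>>>>>>>>>>     --with-unwind --with-persistent-storage --quiet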
>>>>>>>>>>> --
>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov <
>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard <
>>>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Compare your configure line with what's currently in use (or the
>>>>>>>>>>>>> APKBUILD file); there are a few options (with-unwind,
>>>>>>>>>>>>> without-jemalloc, etc.) that need to be set.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> The configure line comes from "./autogen.des":
>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42
>>>>>>>>>>>> It is called at:
>>>>>>>>>>>>
>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40
>>>>>>>>>>>> In my branch at:
>>>>>>>>>>>>
>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26
>>>>>>>>>>>>
>>>>>>>>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for
>>>>>>>>>>>> Alpine is fine.
>>>>>>>>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine.
>>>>>>>>>>>>
>>>>>>>>>>>> Martin
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov <
>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov <
>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard <
>>>>>>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi Martin,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thank you for that.
>>>>>>>>>>>>>>>> A few remarks and questions:
>>>>>>>>>>>>>>>> - how much time does the "docker build" step take? We can
>>>>>>>>>>>>>>>> possibly speed things up by pushing images to Docker Hub, as
>>>>>>>>>>>>>>>> they don't need to change very often.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Definitely such optimization would be a good thing to do!
>>>>>>>>>>>>>>> At the moment, with the 'machine' executor it fetches the base
>>>>>>>>>>>>>>> image and then builds all the Docker layers again and again.
>>>>>>>>>>>>>>> Here are the timings:
>>>>>>>>>>>>>>> 1) Spinning up a VM - around 10secs
>>>>>>>>>>>>>>> 2) prepare env variables - 0secs
>>>>>>>>>>>>>>> 3) checkout code (varnish-cache) - 5secs
>>>>>>>>>>>>>>> 4) activate QEMU - 2secs
>>>>>>>>>>>>>>> 5) build packages
>>>>>>>>>>>>>>> 5.1) x86 deb - 3m 30secs
>>>>>>>>>>>>>>> 5.2) x86 rpm - 2m 50secs
>>>>>>>>>>>>>>> 5.3) aarch64 rpm - 35mins
>>>>>>>>>>>>>>> 5.4) aarch64 deb - 45mins
>>>>>>>>>>>>>>>
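>>>>>>>>>>>>>>> Pushing prebuilt images and pulling them in the jobs would cut
>>>>>>>>>>>>>>> the image preparation down to something like this (repository
>>>>>>>>>>>>>>> and tag names are hypothetical):
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> # one-off, from a developer machine or a dedicated job
>>>>>>>>>>>>>>> docker build -t varnishcache/pkg:ubuntu18-aarch64 -f Dockerfile.ubuntu18-aarch64 .
>>>>>>>>>>>>>>> docker push varnishcache/pkg:ubuntu18-aarch64
>>>>>>>>>>>>>>> # in each CI job, a pull then replaces the rebuild
>>>>>>>>>>>>>>> docker pull varnishcache/pkg:ubuntu18-aarch64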
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job?
>>>>>>>>>>>>>>>> The idea was to have it cloned once in tar-pkg-tools for consistency and
>>>>>>>>>>>>>>>> reproducibility, which we lose here.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I will extract the common steps once I see it working. This is
>>>>>>>>>>>>>>> my first CircleCI project and I am still finding my way around it!
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> - do we want to change things for the amd64 platforms for
>>>>>>>>>>>>>>>> the sake of consistency?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So far there is nothing specific to amd64 or aarch64, except
>>>>>>>>>>>>>>> the base Docker images.
>>>>>>>>>>>>>>> For example, make-deb-packages.sh is reused for both amd64
>>>>>>>>>>>>>>> and aarch64 builds. Same for -rpm- and now for -apk- (Alpine).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Once I feel the change is almost finished I will open a Pull
>>>>>>>>>>>>>>> Request for more comments!
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov <
>>>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov <
>>>>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov <
>>>>>>>>>>>>>>>>>> martin.grigorov@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard <
>>>>>>>>>>>>>>>>>>> guillaume@varnish-software.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Offering arm64 packages requires a few things:
>>>>>>>>>>>>>>>>>>>> - arm64-compatible code (all good in
>>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache)
>>>>>>>>>>>>>>>>>>>> - arm64-compatible package framework (all good in
>>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache)
>>>>>>>>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below)
>>>>>>>>>>>>>>>>>>>> - infrastructure to store and deliver (
>>>>>>>>>>>>>>>>>>>> https://packagecloud.io/varnishcache)
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> So, everything is in place, except for the third point.
>>>>>>>>>>>>>>>>>>>> At the moment, there are two concurrent CI implementations:
>>>>>>>>>>>>>>>>>>>> - travis:
>>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's
>>>>>>>>>>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> - circleci:
>>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the
>>>>>>>>>>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all
>>>>>>>>>>>>>>>>>>>> the packaged platforms
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The issue is that CircleCI doesn't support arm64
>>>>>>>>>>>>>>>>>>>> containers (for now?), so we would need to re-implement the packaging logic
>>>>>>>>>>>>>>>>>>>> in Travis. It's not a big problem, but it's currently not a priority on my
>>>>>>>>>>>>>>>>>>>> side.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> However, I am totally ready to provide help if someone
>>>>>>>>>>>>>>>>>>>> wants to take that up. The added benefit is that Travis
>>>>>>>>>>>>>>>>>>>> would be able to handle everything and we can retire the
>>>>>>>>>>>>>>>>>>>> CircleCI experiment.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I will take a look in the coming days and ask you if I
>>>>>>>>>>>>>>>>>>> need help!
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I've taken a look at the current setup and here are the
>>>>>>>>>>>>>>>>>> problems and possible solutions I've found:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 1) Circle CI
>>>>>>>>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run
>>>>>>>>>>>>>>>>>> on x86_64, so there is no way to build the packages in a "native"
>>>>>>>>>>>>>>>>>> environment
>>>>>>>>>>>>>>>>>> 1.2) possible solutions
>>>>>>>>>>>>>>>>>> 1.2.1) use multiarch cross build
>>>>>>>>>>>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via
>>>>>>>>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and
>>>>>>>>>>>>>>>>>> then builds and runs a custom Docker image that executes a shell script
>>>>>>>>>>>>>>>>>> with the build steps
>>>>>>>>>>>>>>>>>> It will look something like
>>>>>>>>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but
>>>>>>>>>>>>>>>>>> instead of uploading the Docker image as a last step it will run it.
>>>>>>>>>>>>>>>>>> The RPM and DEB build related code from the current
>>>>>>>>>>>>>>>>>> config.yml will be extracted into shell scripts which will be
>>>>>>>>>>>>>>>>>> copied into the custom Docker images.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Of these two possible approaches, I have a better picture in
>>>>>>>>>>>>>>>>>> my head of how to do 1.2.2, but I don't mind digging into
>>>>>>>>>>>>>>>>>> 1.2.1 if that is what you'd prefer.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I've decided to stay with Circle CI and use 'machine'
>>>>>>>>>>>>>>>>> executor with QEMU.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The changed config.yml could be seen at
>>>>>>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and
>>>>>>>>>>>>>>>>> the build at
>>>>>>>>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8
>>>>>>>>>>>>>>>>> The builds on the x86 arch take 3-4 mins, but aarch64
>>>>>>>>>>>>>>>>> (emulation!) takes ~40 mins.
>>>>>>>>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for
>>>>>>>>>>>>>>>>> CentOS 7 and Ubuntu 18.04, both amd64 and aarch64.
>>>>>>>>>>>>>>>>> TODOs:
>>>>>>>>>>>>>>>>> - migrate Alpine
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Build on Alpine aarch64 fails with:
>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>> automake: this behaviour will change in future Automake
>>>>>>>>>>>>>> versions: they will
>>>>>>>>>>>>>> automake: unconditionally cause object files to be placed in
>>>>>>>>>>>>>> the same subdirectory
>>>>>>>>>>>>>> automake: of the corresponding sources.
>>>>>>>>>>>>>> automake: project, to avoid future incompatibilities.
>>>>>>>>>>>>>> parallel-tests: installing 'build-aux/test-driver'
>>>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:12: warning:
>>>>>>>>>>>>>> libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ...
>>>>>>>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ...
>>>>>>>>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here
>>>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/
>>>>>>>>>>>>>> automake_boilerplate.am' included from here
>>>>>>>>>>>>>> + autoconf
>>>>>>>>>>>>>> + CONFIG_SHELL=/bin/sh
>>>>>>>>>>>>>> + export CONFIG_SHELL
>>>>>>>>>>>>>> + ./configure '--prefix=/opt/varnish'
>>>>>>>>>>>>>> '--mandir=/opt/varnish/man' --enable-maintainer-mode
>>>>>>>>>>>>>> --enable-developer-warnings --enable-debugging-symbols
>>>>>>>>>>>>>> --enable-dependency-tracking --with-persistent-storage --quiet
>>>>>>>>>>>>>> configure: WARNING: dot not found - build will fail if svg
>>>>>>>>>>>>>> files are out of date.
>>>>>>>>>>>>>> configure: WARNING: No system jemalloc found, using system
>>>>>>>>>>>>>> malloc
>>>>>>>>>>>>>> configure: error: Could not find backtrace() support
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Does anyone know a workaround?
>>>>>>>>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - store the packages as CircleCI artifacts
>>>>>>>>>>>>>>>>> - anything else that is still missing
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Adding more architectures would be as easy as adding a new
>>>>>>>>>>>>>>>>> Dockerfile with a base image of the respective architecture.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 2) Travis CI
>>>>>>>>>>>>>>>>>> 2.1) problems
>>>>>>>>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle!
>>>>>>>>>>>>>>>>>> Although if we use the CircleCI 'machine' executor, it will
>>>>>>>>>>>>>>>>>> be slower than the current 'Docker' executor!
>>>>>>>>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu
>>>>>>>>>>>>>>>>>> The current setup at CircleCI uses CentOS 7.
>>>>>>>>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 3) GitHub Actions
>>>>>>>>>>>>>>>>>> GH Actions does not support ARM64, but it supports
>>>>>>>>>>>>>>>>>> self-hosted ARM64 runners
>>>>>>>>>>>>>>>>>> 3.1) The problem is that there is no way to make a
>>>>>>>>>>>>>>>>>> self-hosted runner really private, i.e. if someone forks
>>>>>>>>>>>>>>>>>> Varnish Cache, any commit in the fork will trigger builds on
>>>>>>>>>>>>>>>>>> the arm64 node. There is no way to reserve the runner only
>>>>>>>>>>>>>>>>>> for commits against
>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Do you see other problems or maybe different approaches?
>>>>>>>>>>>>>>>>>> Do you have a preference which way to go?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>>>>>> varnish-dev mailing list
>>>>>>>>>>>>>>>>>>>> varnish-dev@varnish-cache.org
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
Re: Support for AARCH64 [ In reply to ]
> No, I wasn't aware of this discussion.
> The weekly package installed successfully now!
> Thank you!

And I lost track of this thread, but all's well that ends well ;-)

We are looking forward to feedback regarding the weeklies; please make
sure to upgrade frequently and let us know as soon as something goes
wrong.
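
For instance, depending on the platform, staying current is just
something like:

apt update && apt install --only-upgrade varnish   # Debian/Ubuntu
dnf upgrade varnish                                # CentOS 8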

Cheers,
Dridi
_______________________________________________
varnish-dev mailing list
varnish-dev@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
