Kind: Support arm64

Created on 11 Dec 2018 · 59 Comments · Source: kubernetes-sigs/kind

Device under test is a Packet c1.large.arm 96-core arm64 machine running Ubuntu 18.04.

ed@ed-2a-bcc-llvm:~$ go version
go version go1.11.2 linux/arm64
ed@ed-2a-bcc-llvm:~$ go get sigs.k8s.io/kind
ed@ed-2a-bcc-llvm:~$ go/bin/kind create cluster
Creating cluster 'kind-1' ...
 ✓ Ensuring node image (kindest/node:v1.12.2)
 ✓ [kind-1-control-plane] Creating node container 📦
 ✗ [kind-1-control-plane] Fixing mounts 🗻
Error: failed to create cluster: exit status 1
Usage:  
  kind create cluster [flags]

Flags:  
      --config string   path to a kind config file
  -h, --help            help for cluster
      --image string    node docker image to use for booting the cluster
      --name string     cluster context name (default "1")
      --retain          retain nodes for debugging when cluster creation fails
      --wait duration   Wait for control plane node to be ready (default 0s)

Global Flags:
      --loglevel string   logrus log level [panic, fatal, error, warning, info, debug] (default "warning")

failed to create cluster: exit status 1
ed@ed-2a-bcc-llvm:~$ docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:52:41 2018
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:01 2018
  OS/Arch:          linux/arm64
  Experimental:     false
Labels: help wanted, kind/feature, lifecycle/frozen, priority/backlog


All 59 comments

Looking at https://github.com/kubernetes-sigs/kind/blob/master/pkg/cluster/context.go#L196-L206 where the code errors out, there's a TODO for logging from @BenTheElder .

Looks like the underlying reason is that kindest/node is not a multi-arch image.

ed@ed-2a-bcc-llvm:~$ docker run --rm mplatform/mquery kindest/node:v1.12.2
Image: kindest/node:v1.12.2
 * Manifest List: No
 * Supports: amd64/linux

Yes, for now we only have an amd64 image: https://github.com/kubernetes-sigs/kind/blob/master/images/base/Dockerfile#L28-L33

/priority important-longterm
/kind feature

@dims hacked up a working version of this: https://paste.fedoraproject.org/paste/gdlF9fqXeSADK-aPN-sEbw/raw :tada:
I think we might want to put a little more thought into how to do this well like #188, but this should be doable. Tentatively tracking this for kind 1.0 in Q1 2019.

We have yet to see details around this announcement, but it ought to simplify the process of building ARM images significantly: https://techcrunch.com/2019/04/24/docker-partners-with-arm/

yes!

Also, I somehow forgot to update this issue: we have arm64 support, just no published images yet (that will need some more thinking...). If you build the images yourself, kind should work on arm64 today 😅
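
For anyone who wants to try that, the rough flow at the time looked something like the sketch below; the available flags and default tags vary across kind versions, so treat the tag names here as assumptions rather than exact invocations.

# build the base image for the host architecture (arm64 here)
kind build base-image
# build the node image on top of it (tag names are assumptions, not guaranteed defaults)
kind build node-image --base-image kindest/base:latest
# boot a cluster from the locally built node image
kind create cluster --image kindest/node:latest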

Somewhat; we then either have to publish different images for arm or sort out image manifests and what that pipeline looks like. Ideally we don't always require building both, but we do when we publish pre-built images.

There are some other options where we stop needing RUN outside of the base image that would make this all simpler as well.

On Wed, Apr 24, 2019 at 10:15 PM Peter Benjamin wrote:

Once we can cross-build images for ARM from x86 with native docker tooling, would that solve the image publishing problem?


My fervent hope is that the new Docker tooling will make multi-architecture manifests much easier to produce. The current setup for most projects is just a bit complex.

So FWIW, I think I did figure out how to work manifest-tool. This definitely seems feasible this year, if a bit clunky; the trickiest part now is that we need to write some tooling to cross compile the kind image (or coordinate kicking off a build on Packet, or ... 🤔).
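
For reference, the general shape of publishing a manifest list, whether with manifest-tool or the docker CLI's (then experimental) manifest commands, looks roughly like the sketch below; the tags are placeholders, not real published images.

# push the per-arch images first, then stitch them into a single manifest list
docker manifest create kindest/node:vX.Y.Z \
  kindest/node:vX.Y.Z-amd64 \
  kindest/node:vX.Y.Z-arm64
# make sure the arm64 entry is annotated with the right architecture
docker manifest annotate kindest/node:vX.Y.Z kindest/node:vX.Y.Z-arm64 --arch arm64
# push the manifest list itself
docker manifest push kindest/node:vX.Y.Z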

I think I'd like to get this into GCB-based publishing with a cross compile so we can start automating publishing multi-arch node images. I punted on looking further into that while working on the breaking image changes in #461, but those are in now :-)

@BenTheElder we can use CircleCI or Travis to do that; I see that another project under kubernetes-sigs has the integrations enabled: https://github.com/kubernetes-sigs/kubeadm-dind-cluster

I have experience building pipelines on those; it would be relatively easy to create a pipeline there that automatically publishes the images per PR or per tag.

We want them to be pushed based on Kubernetes repo changes instead.

The tricky part will not be setting up Travis etc.; we have Prow, GCB, ... but we need the credentials to not be available to presubmits, we need the triggering to be right, and we need to build even for PPC64LE ideally, which I doubt Travis etc. are running natively on, so we need to cross compile.

Kubernetes uses some tricks for doing this with binfmt_misc and qemu that work very portably.
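
As an illustration of the binfmt_misc + qemu approach, the commands below register qemu user-mode emulators so foreign-arch binaries run transparently; the multiarch/qemu-user-static image is just one common way to do this, not necessarily what Kubernetes itself uses.

# register qemu interpreters via binfmt_misc (one-time, privileged)
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# after that, an arm64 image can run on an amd64 host under emulation
docker run --rm arm64v8/alpine uname -m   # should print aarch64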


hmm, I think I have to read more about prow https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md

Minor update: I made sure the ip-masq-agent pushed multi-arch (manifest list) images upstream before we adopted it, and our own tiny networking daemon is cross compiled and pushes multi-arch images.

These were simpler than Kubernetes node images, but at least we have some more multi-arch samples and we continue to nominally work on arm64.

I think the next step is to support building node images from Kubernetes release tarballs so we can consume Kubernetes's upstream cross compilation output and save time building & publishing. Then we start building manifest list images from those.

@BenTheElder Can you take a look and provide current status for this? I'd very much like to identify the list of dependencies to get this work complete, so that we can reasonably assess whether it has any chance to be ready for Kubecon or Rejekts, and if so assess that against all other things.

Hi, sorry I missed this comment.

KIND should work fine, in that most of what we publish and all of the dependencies work on arm, BUT you must build your own base + node image.

These are somewhat expensive and complex builds, and demand is low, so it has not been worth making our build even more complex and slow (reducing our velocity) at the moment.

Interested parties can build their own, which is similarly required if you want completely arbitrary Kubernetes versions at the moment.

We do prebuild a handful of node images at each release (one for the latest patch version of each minor from 1.11.x forward), but we aren't doing broad pre-building _yet_.

CI is problematic: the Kubernetes project's CI does not run natively on ARM, I personally do not have time to maintain anything further, and I don't think our other regular contributors do either; the Kubernetes infra working group is also not even close to having the community run CI, etc.

There was contributed CI set up previously, but it has not been kept maintained so we can't rely on the signal. https://testgrid.k8s.io/conformance-all#kind,%20v1.14%20(dev,%20ARM64)

I'm very interested in this for ppc64le and have colleagues who want to use this for s390x. Based on the latest comment here, we'll see if we can build the images ourselves, but it would be awesome if we could just pull them from docker hub in the future, like you can for amd64.

@BenTheElder earlier you wrote:

we need to build even for PPC64LE ideally, which I doubt Travis etc. are running natively on, so we need to cross compile.

so I just wanted to let you know... Travis does actually support native builds for ppc64le and s390x now. Here's the announcement from last year: https://blog.travis-ci.com/2019-11-12-multi-cpu-architecture-ibm-power-ibm-z

Unfortunately, Kubernetes CI is not currently on something like Travis, and there is significant friction to using it: https://github.com/kubernetes/test-infra/tree/master/prow

Right now we're not pushing a lot of images to Docker Hub, cross built or otherwise, because we still have to round out some unrelated things with testing Kubernetes itself, but I'd be willing to look at starting with the base image at least for cross builds. We need to build that infrequently and the build is not super expensive.

We do cross build for ~all architectures ourselves already:

  • containerd for use in the nodes (at our own semi-significant effort as-is)
  • the loadbalancer / haproxy image for HA
  • the kindnetd image for CNI / node to node routing

node image build is more problematic.

Tentative exploration of base image cross build: https://github.com/kubernetes-sigs/kind/pull/1366
This is really slow by comparison even on my workstation only adding amd64, and we currently have no way to verify that the images work on those architectures, so I'm not committing to it yet.
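
For context, a buildx-style cross build of the base image would look roughly like the sketch below. This only approximates the underlying docker step (kind's own base image build wraps more than this), and the builder name and target tag are illustrative.

# one-time: create a builder that can target multiple platforms
docker buildx create --name kind-cross --use
# cross build the base image Dockerfile for amd64 + arm64 and push the resulting manifest list
docker buildx build --platform linux/amd64,linux/arm64 \
  -t example.com/you/kindest-base:dev --push images/base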

Hi @BenTheElder, we have successfully run the kind e2e on ARM, and plan to submit the test results to testgrid. Is there anything else we can help with?

Hi, I think a great next step would be to look at the kube-cross image; bazel looks like it will be removed upstream, and the cross image is now built on community infra.

Hi @BenTheElder, I saw there is a PR (link) for building the kube-cross image for multi-arch. I will do some research on it.

https://github.com/kubernetes-sigs/kind/pull/1346 fixes image preloading when the image does not have proper arm metadata (many cross built images are only correct in the manifest list); we fixed this in the pause image at least.

@BenTheElder - you note that "many cross built images are only correct in the manifest list" - can you provide an example of same? I suspect that means figuring out the pattern for the cross-built image creation and fixing that build process many places upstream.

See the #1346 discussion / back links: if you do a normal docker build where you FROM <some other arch image> w/ qemu, the manifest metadata inherits somewhat from the host instead of the intended arch. buildx doesn't seem to have this issue. The manifest list will point to the right image, but that image's manifest will have the wrong arch.

This broke users on bleeding-edge CRI-O and Kubernetes because CRI-O started validating more of the metadata.
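
A quick way to check for that mismatch is to compare what the manifest list claims against what the image's own config records; the image reference below is a placeholder.

# what the manifest list claims for each entry
docker manifest inspect example.com/you/some-image:tag
# what the pulled image's own config actually records (the field that was wrong here);
# on a multi-arch tag you may need --platform or a digest to pull a specific variant
docker pull example.com/you/some-image:tag
docker image inspect --format '{{.Os}}/{{.Architecture}}' example.com/you/some-image:tag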

I'm starting to re-evaluate how we can support platforms again, though it is on my backburner.
Is there any reason to expect ARM (not ARM64) users? It looks like these days even the Raspberry Pi is arm64?

The primary OS for Raspberry Pi is 32-bit with a 64-bit beta, if it's trivial to support armhf (32-bit) then you should, and you'll make a lot of users (millions) happy.

If you need any help @BenTheElder then don't hesitate to ask - almost all of the OpenFaaS projects and utilities are multi-arch.

Thanks. That's unfortunate re: rpi OS.

I have multi-arch builds for most things, but the node images are not normal builds and are a bit expensive atm (building Kubernetes...). Building for armhf locally, even on my large workstation, is especially slow for every image so far; we may revisit this in the future when we have less need for kind iteration velocity.

Right now upstream CI health is at its worst, so we will not be able to put more into this for a bit.


Why is it unfortunate that armhf is still in use? You'd be doing the community a service even publishing an ARM64 image.

er, it's unfortunate that arm32 is still in use for the reasons mentioned above (build time).

we're already building the base image etc. for multiple platforms.

The node image is not possible to cross build at the moment (it can't be accomplished with a plain Dockerfile build, so we can't just throw buildx at it).

Does the node image already support arm64?

Does the node image already support arm64?

Kind can run on arm64, but you need to build the node image yourself.

Hello, I am attempting to build my own node-image on an arm64 host.
It looks like the kind build node-image command starts with the file build/run.sh in the Kubernetes source, which starts with
FROM k8s.gcr.io/build-image/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG ... which does not provide an arm64 image.

Do I need to actually _make_ the entirety of kubernetes first before attempting to build a node-image?

Any help would be greatly appreciated. Thanks!

Terminal output:

[ssm-user bin]$ arch
aarch64
[ssm-user bin]$ ./kind build node-image
Starting to build Kubernetes
+++ [0914 18:13:40] Verifying Prerequisites....
+++ [0914 18:13:40] Building Docker image kube-build:build-66fac25301-5-v1.15.0-1
+++ Docker build command failed for kube-build:build-66fac25301-5-v1.15.0-1

Sending build context to Docker daemon  8.704kB
Step 1/16 : FROM k8s.gcr.io/build-image/kube-cross:v1.15.0-1
v1.15.0-1: Pulling from build-image/kube-cross
no matching manifest for linux/arm64/v8 in the manifest list entries

To retry manually, run:

docker build -t kube-build:build-66fac25301-5-v1.15.0-1 --pull=false /home/ssm-user/go/src/k8s.io/kubernetes/_output/images/kube-build:build-66fac25301-5-v1.15.0-1

!!! [0914 18:13:40] Call tree:
!!! [0914 18:13:40]  1: build/run.sh:31 kube::build::build_image(...)
Failed to build Kubernetes: failed to build binaries: command "build/run.sh make all 'WHAT=cmd/kubeadm cmd/kubectl cmd/kubelet'" failed with error: exit status 1
ERROR: error building node image: failed to build kubernetes: failed to build binaries: command "build/run.sh make all 'WHAT=cmd/kubeadm cmd/kubectl cmd/kubelet'" failed with error: exit status 1
Command Output: +++ [0914 18:13:40] Verifying Prerequisites....
+++ [0914 18:13:40] Building Docker image kube-build:build-66fac25301-5-v1.15.0-1
+++ Docker build command failed for kube-build:build-66fac25301-5-v1.15.0-1

Sending build context to Docker daemon  8.704kB
Step 1/16 : FROM k8s.gcr.io/build-image/kube-cross:v1.15.0-1
v1.15.0-1: Pulling from build-image/kube-cross
no matching manifest for linux/arm64/v8 in the manifest list entries

To retry manually, run:

docker build -t kube-build:build-66fac25301-5-v1.15.0-1 --pull=false /home/ssm-user/go/src/k8s.io/kubernetes/_output/images/kube-build:build-66fac25301-5-v1.15.0-1

!!! [0914 18:13:40] Call tree:
!!! [0914 18:13:40]  1: build/run.sh:31 kube::build::build_image(...)

Do I need to actually make the entirety of kubernetes first before attempting to build a node-image?

No, we wrap the upstream build inside kind build node-image.

It looks like the kind build node-image starts with the file build/run.sh in the kubernetes source, which starts with
FROM k8s.gcr.io/build-image/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG ... which does not provide an arm64 image.

Yes, parts of the upstream build only support building from amd64 currently. There are upstream issues discussing this.

Right now you'll have to build using bazel, I think. There's an option for that in `kind build node-image`.

@vielmetti @BenTheElder @scottmalkie To build on arm64:

kind build node-image --image node:v1.18 \
--kube-root /root/go/src/k8s.io/kubernetes-release-1.18 \
--type bazel \
--base-image kindest/base:v20200826-5c3ff118

I got this error. How do I solve it? Thanks.

INFO: Repository debian-iptables-arm64 instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule container_pull defined at:
/root/.cache/bazel/_bazel_root/ebf03545b4e1140a2d71eccbc6690d4b/external/io_bazel_rules_docker/container/pull.bzl:173:33: in
INFO: Repository debian-base-arm64 instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule container_pull defined at:
/root/.cache/bazel/_bazel_root/ebf03545b4e1140a2d71eccbc6690d4b/external/io_bazel_rules_docker/container/pull.bzl:173:33: in
ERROR: An error occurred during the fetch of repository 'debian-iptables-arm64':
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/ebf03545b4e1140a2d71eccbc6690d4b/external/io_bazel_rules_docker/container/pull.bzl", line 128, column 71, in _impl
fail("Pull command failed: %s (%s)" % (result.stderr, " ".join(args)))
Error in join: expected string for sequence element 0, got 'path'
ERROR: /root/go/src/k8s.io/kubernetes-release-1.18/build/BUILD:61:22: //build:kube-proxy-internal depends on @debian-iptables-arm64//image:image in repository @debian-iptables-arm64 which failed to fetch. no such package '@debian-iptables-arm64//image': expected string for sequence element 0, got 'path'
ERROR: Analysis of target '//build:docker-artifacts' failed; build aborted: Analysis failed
INFO: Elapsed time: 26.151s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (117 packages loaded, 1535 targets configured)
FAILED: Build did NOT complete successfully (117 packages loaded, 1535 targets configured)


@scottmalkie did the build succeed? Can you share it?

Yes, that image build for ARM64 would be really useful if you can share it.

Is there any official arm64 image?

I believe building using Bazel currently can't work on ARM64 due to https://github.com/bazelbuild/rules_docker/issues/1155

🤔 Bazel shipped arm64 support for Envoy, and distroless is building (and testing) containers on arm64 with rules_docker.

I see, the problem is container_pull. It looks like binaries were added for arm64, and s390x, but never added to the workspace rules. This should be relatively straightforward to fix. Happy to help review PRs.

@BenTheElder is it possible to build node image using the K8s released binaries (https://dl.k8s.io/v1.19.0/kubernetes-node-linux-arm64.tar.gz)?

@tcnghia not currently, no. That would be https://github.com/kubernetes-sigs/kind/issues/381, which is pending a fleshed-out design.
There will also be some drawbacks (e.g. our binaries are not built the same as the default builds, your images will be larger, etc.).

There are upstream discussions in https://github.com/kubernetes/kubernetes about building Kubernetes for other platforms w/o cross building from AMD64, please try to participate there if you're working on this aspect.

The upstream build / testing efforts for non-amd64 platforms are still underdeveloped enough that, with my Kubernetes hat on (not KIND), I'm inclined to continue suggesting that the project should not ship these binaries anyhow, as they are in no way qualified before release.

With my KIND hat on: since they are shipped for now, and people downstream of Kubernetes want to test on these platforms themselves, it makes sense to try to support it in KIND ...

.. but as long as Kubernetes has no release-blocking tests on any non-amd64 platforms many of us are unlikely to be interested in expending a lot of effort on these platforms. Interested vendors have come and gone without working with the release team on this and the KIND maintainers have barely had time to sort out more pressing and obviously useful / supportable concerns.

@u2bo @MichaelCade I confirmed that kind build node-image --type bazel will work after bazelbuild/rules_docker#1155 is fixed. This is what I do (roughly sketched as commands after the list):

  1. checkout K8s source code at 1.19 or later.
  2. apply these changes https://github.com/tcnghia/kubernetes/compare/master...tcnghia:use-arm64-rules-docker (or you can also create a patched rules_docker yourself based on the change here https://github.com/bazelbuild/rules_docker/compare/v0.14.4...tcnghia:arm64-old).
  3. go to src/go/k8s.io/kubernetes
  4. run kind build node-image --type bazel
  5. tag the newly built kindest/node image and push to your repo.
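
Roughly, those steps as commands; the paths, branch, and registry names below are illustrative, the patch linked above still has to be applied by hand, and the kindest/node:latest tag is assumed to be the build's default.

# 1. check out the Kubernetes source at 1.19 or later
git clone https://github.com/kubernetes/kubernetes ~/go/src/k8s.io/kubernetes
cd ~/go/src/k8s.io/kubernetes && git checkout release-1.19
# 2. apply the patched rules_docker changes linked above (manual step)
# 3. + 4. build the node image with the bazel backend from inside the checkout
kind build node-image --type bazel
# 5. retag the result and push it to your own registry
docker tag kindest/node:latest example.com/you/kindest-arm64-node:1.19
docker push example.com/you/kindest-arm64-node:1.19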


@tcnghia I used your config to build and got an error. It seems like the kubelet file was not output?

ERROR: /root/go/src/k8s.io/kubernetes/cmd/kubelet/BUILD:10:10: GoLink cmd/kubelet/kubelet_/kubelet failed (Exit 1): builder failed: error executing command 
  (cd /root/.cache/bazel/_bazel_root/013dd613323da188b43f955d6f89ac02/sandbox/linux-sandbox/5718/execroot/io_k8s_kubernetes && \
  exec env - \
    CGO_ENABLED=1 \
    GOARCH=arm64 \
    GOOS=linux \
    GOPATH='' \
    GOROOT=external/go_sdk \
    GOROOT_FINAL=GOROOT \
    PATH=/usr/bin:/bin \
  bazel-out/host/bin/external/go_sdk/builder '-param=bazel-out/aarch64-fastbuild-ST-f6ff168d88b9/bin/cmd/kubelet/kubelet_/kubelet-0.params' -- -extld /usr/bin/gcc '-buildid=redacted' -extldflags '-fuse-ld=gold -Wl,-no-as-needed -Wl,-z,relro,-z,now -B/usr/bin -pass-exit-codes -lm')
Execution platform: @local_config_platform//:host

Use --sandbox_debug to see verbose messages from the sandbox builder failed: error executing command 
  (cd /root/.cache/bazel/_bazel_root/013dd613323da188b43f955d6f89ac02/sandbox/linux-sandbox/5718/execroot/io_k8s_kubernetes && \
  exec env - \
    CGO_ENABLED=1 \
    GOARCH=arm64 \
    GOOS=linux \
    GOPATH='' \
    GOROOT=external/go_sdk \
    GOROOT_FINAL=GOROOT \
    PATH=/usr/bin:/bin \
  bazel-out/host/bin/external/go_sdk/builder '-param=bazel-out/aarch64-fastbuild-ST-f6ff168d88b9/bin/cmd/kubelet/kubelet_/kubelet-0.params' -- -extld /usr/bin/gcc '-buildid=redacted' -extldflags '-fuse-ld=gold -Wl,-no-as-needed -Wl,-z,relro,-z,now -B/usr/bin -pass-exit-codes -lm')
Execution platform: @local_config_platform//:host

Use --sandbox_debug to see verbose messages from the sandbox
link: error running subcommand: signal: killed
INFO: Elapsed time: 64.553s, Critical Path: 61.90s
INFO: 1 process: 1 linux-sandbox.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully

@u2bo I created a .travis.yml here to build the node image https://github.com/tcnghia/playground/blob/master/.travis.yml (see successful build here) . If you like you can clone that file and add a step to push the built image to your own repository. Obviously, this isn't officially supported.

@vielmetti I've sent PR https://github.com/bazelbuild/rules_docker/pull/1671 to rules_docker. Once that merges & releases, if you can help bring https://github.com/kubernetes/kubernetes to use the latest release of rules_docker I think we are good to go with building KinD arm64 node images using kind build node-image --type bazel.


@tcnghia thanks very much. Can you share your image?

@u2bo docker.io/tcnghia/kindest-arm64-node:1.19.3 should work
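
To use that (unofficial) image, pointing kind at it should be enough, e.g.:

# boot a cluster from the community-built arm64 node image mentioned above
kind create cluster --image docker.io/tcnghia/kindest-arm64-node:1.19.3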

Bazel's rules_docker has been updated, at https://github.com/bazelbuild/rules_docker/pull/1671 Thanks @tcnghia for that update. That PR is not in the 0.15.0 release of November 2, 2020, but I'd expect it to land in 0.15.1 whenever that comes out. The rules_docker releases are at https://github.com/bazelbuild/rules_docker/releases

I'll need some help navigating kubernetes/kubernetes to get this updated. It looks like the latest update to k/k rules_docker is at https://github.com/kubernetes/kubernetes/pull/93981 from @BenTheElder . Since k/k tends to use specific releases, we'll have to wait until a rules_docker release to then promote this change.

Thanks @vielmetti. The main issue I saw when trying to build with latest rules_docker is the latest breaking change from rules_python, which rules_docker depends on.

@BenTheElder I could send a PR for a Travis workflow to build arm64 (and maybe s390x) node images without cross-building. Would that be useful? I believe Travis is free for OSS up to a quota. Also, can you please add references to the upstream discussion about building on other platforms? Thanks!

Also, can you please add references to the upstream discussion about building on other platforms? Thanks!

https://github.com/kubernetes/community/pull/5300

Hi @BenTheElder & @munnerz, I created a patch over the weekend that adds the ability to generate a node image from the released k8s binary tarballs. Specifically, I wanted to avoid the need to recompile Kubernetes on ARM just to generate a node image for kind clusters, and instead rely on the official binaries.

The workflow looks like this:

wget https://dl.k8s.io/v1.20.0/kubernetes-server-linux-arm64.tar.gz
tar zxf kubernetes-server-linux-arm64.tar.gz
kind build node-image --type bindir --kube-root kubernetes/server/bin

Are you folks interested in a PR?

@rosti your PR would help with https://github.com/kubernetes-sigs/kind/issues/381 I believe

Thanks @tcnghia , moved the discussion over to #381

@rosti thanks -- I'd like to more generally re-do the UX for building node images to allow more control over the sources (and indeed have started on that). I'd be OK with this, but only if people understood that it's a stop-gap measure and the interface will be changing. (--type was already going to quietly be removed as the result of upstream build changes 🙃.)

Also, I'm not sure we'd release images this way; they're substantially larger than our builds.


FYI I'm planning to take on ARM as a 2021 project, given the new spate of developer hardware, with 2020 nearly closed and its work already planned.

I'll comment more on #381


Hi Ben, I am happy to help. Please feel free to let me know.

For now I won't be sending a PR for my node image generation patch as Ben has intentions to revamp the image generation code and add this capability in a different manner.

In the meantime I have uploaded my images to Docker Hub (see the overview there for details).
