/kind bug
It should be possible to run the tests on a workstation in order to improve developer velocity.
kubeadm version (use `kubeadm version`): a very recent commit, https://github.com/kubernetes/kubernetes/commit/8b98e802eddb9f478ff7d991a2f72f60c165388a.

Environment:
- Kubernetes version (use `kubectl version`):
- Kernel (use `uname -a`):

What happened?
I ran `KUBE_ROOT=$(pwd) go test ./cmd/kubeadm/...` and received several failures.

What you expected to happen:
I expected the tests to pass.

How to reproduce it (as minimally and precisely as possible):
Check out a recent commit and run the command above, potentially only on OS X or other systems that generally can't write to / without escalated permissions.
At least one failure is due to platform differences: OS X does not pass the test we have for default configurations.
https://github.com/kubernetes/kubernetes/commit/e1fdaa177f60e280669aef94074a21efd0d928ea introduced the test that fails on non-Linux systems.
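For reference, a minimal sketch of how that test could guard itself with a runtime skip; the test body is elided and this is not necessarily the right fix:

```go
package config

import (
	"runtime"
	"testing"
)

func TestConfigFileAndDefaultsToInternalConfig(t *testing.T) {
	if runtime.GOOS != "linux" {
		// The kubelet component defaults differ per platform (note the
		// missing nodefs.inodesFree eviction threshold in the failure
		// diff later in this thread), so the golden-file comparison
		// only holds on Linux.
		t.Skipf("skipping on %s: defaulted config is Linux-specific", runtime.GOOS)
	}
	// ... existing comparison against testdata/defaulting/master/defaulted.yaml ...
}
```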
on Ubuntu 17.10 the only tests that fail for me are:
k8s.io/kubernetes/cmd/kubeadm/app/util/system
but it's kind of expected for my system to fail these.
also, are we actually supporting unit tests on non-linux?
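(for what it's worth, the usual Go mechanism for Linux-only tests is a build constraint, so that `go test` on other platforms never even compiles the file. A sketch under that assumption, using the Go 1.11-era tag syntax; not necessarily how these packages are actually organized:)

```go
// +build linux

// With the build tag above, `go test` on darwin or windows skips this
// file entirely instead of failing at runtime.
package system

import "testing"

// TestLinuxOnlyValidators is an illustrative placeholder for tests that
// exercise Linux-specific validators (cgroups, kernel config, etc.).
func TestLinuxOnlyValidators(t *testing.T) {
	// ... assertions that only make sense on Linux ...
}
```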
could you please share the error output?
I don't know if we're supporting it, but it is very convenient to run the tests outside of CI, especially for new contributors.
Here is the complete failure output I've got:
Test Failure output
$ KUBE_ROOT=$(pwd) go test ./cmd/kubeadm/...
? k8s.io/kubernetes/cmd/kubeadm [no test files]
? k8s.io/kubernetes/cmd/kubeadm/app [no test files]
ok k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/fuzzer (cached)
? k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme [no test files]
ok k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3 (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1 (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/validation (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/cmd (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/cmd/alpha (cached)
? k8s.io/kubernetes/cmd/kubeadm/app/cmd/options [no test files]
ok k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade 0.052s
ok k8s.io/kubernetes/cmd/kubeadm/app/cmd/util (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/constants (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/discovery (cached)
? k8s.io/kubernetes/cmd/kubeadm/app/discovery/file [no test files]
? k8s.io/kubernetes/cmd/kubeadm/app/discovery/https [no test files]
ok k8s.io/kubernetes/cmd/kubeadm/app/discovery/token (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/features (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/images (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/proxy (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/bootstraptoken/clusterinfo (cached)
? k8s.io/kubernetes/cmd/kubeadm/app/phases/bootstraptoken/node [no test files]
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/certs (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/certs/renewal (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/controlplane (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/etcd (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/kubelet (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/markcontrolplane (cached)
? k8s.io/kubernetes/cmd/kubeadm/app/phases/patchnode [no test files]
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/selfhosting (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/phases/uploadconfig (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/preflight (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/util (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/util/audit (cached)
? k8s.io/kubernetes/cmd/kubeadm/app/util/certs [no test files]
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1alpha3, Kind=JoinConfiguration
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1alpha3, Kind=JoinConfiguration
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1alpha3, Kind=JoinConfiguration
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1alpha3, Kind=JoinConfiguration
--- FAIL: TestConfigFileAndDefaultsToInternalConfig (0.24s)
--- FAIL: TestConfigFileAndDefaultsToInternalConfig/incompleteYAMLToDefaultedv1beta1 (0.00s)
initconfiguration_test.go:123: the expected and actual output differs.
in: testdata/defaulting/master/incomplete.yaml
out: testdata/defaulting/master/defaulted.yaml
groupversion: kubeadm.k8s.io/v1beta1
diff:
--- expected
+++ actual
@@ -115,7 +115,6 @@
imagefs.available: 15%!
(MISSING) memory.available: 100Mi
nodefs.available: 10%!
(MISSING)- nodefs.inodesFree: 5%!
(MISSING) evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
FAIL
FAIL k8s.io/kubernetes/cmd/kubeadm/app/util/config 2.172s
ok k8s.io/kubernetes/cmd/kubeadm/app/util/config/strict 0.048s
? k8s.io/kubernetes/cmd/kubeadm/app/util/dryrun [no test files]
ok k8s.io/kubernetes/cmd/kubeadm/app/util/etcd (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/util/kubeconfig (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/util/pkiutil (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/util/pubkeypin (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/util/runtime (cached)
ok k8s.io/kubernetes/cmd/kubeadm/app/util/staticpod (cached)
CGROUPS_SYSTEM1: enabled
CGROUPS_SYSTEM2: enabled
CGROUPS_SYSTEM1: enabled
CGROUPS_SYSTEM2: enabled
CGROUPS_SYSTEM1: enabled
CGROUPS_SYSTEM2: missing
DOCKER_VERSION: 1.10.1
DOCKER_VERSION: 1.11.1
DOCKER_GRAPH_DRIVER: bad_driver
DOCKER_VERSION: 1.11.1
DOCKER_GRAPH_DRIVER: driver_1
DOCKER_VERSION: 1.12.1
DOCKER_GRAPH_DRIVER: driver_2
DOCKER_VERSION: 1.13.1
DOCKER_GRAPH_DRIVER: driver_2
DOCKER_VERSION: 17.03.0-ce
DOCKER_GRAPH_DRIVER: driver_2
DOCKER_VERSION: 17.06.0-ce
DOCKER_GRAPH_DRIVER: driver_2
DOCKER_VERSION: 17.09.0-ce
DOCKER_GRAPH_DRIVER: driver_2
DOCKER_VERSION: 18.06.0-ce
DOCKER_GRAPH_DRIVER: driver_2
DOCKER_VERSION: 18.09.1-ce
DOCKER_VERSION: 19.01.0
--- FAIL: TestValidateDockerInfo (0.00s)
--- FAIL: TestValidateDockerInfo/valid_Docker_version_18.09.1-ce (0.00s)
docker_validator_test.go:115:
Error Trace: docker_validator_test.go:115
Error: Expected nil, but got: this Docker version is not on the list of validated versions: 18.09.1-ce. Latest validated version: 18.09
Test: TestValidateDockerInfo/valid_Docker_version_18.09.1-ce
Messages: Expect error not to occur with docker info {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:driver_2 DriverStatus:[] SystemStatus:[] Plugins:{Volume:[] Network:[] Authorization:[] Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime: LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:<nil> NCPU:0 MemTotal:0 GenericResources:[] DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion:18.09.1-ce ClusterStore: ClusterAdvertise: Runtimes:map[] DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[]}
KERNEL_VERSION: 3.19.9-99-test
KERNEL_VERSION: 4.4.14+
KERNEL_VERSION: 2.0.0
KERNEL_VERSION: 5.0.0
KERNEL_VERSION: 3.9.0
CONFIG_REQUIRED_1: enabled
CONFIG_REQUIRED_2: enabled (as module)
CONFIG_OPTIONAL_1: not set
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: not set
CONFIG_ALIASE_FORBIDDEN_2: not set
CONFIG_REQUIRED_1: disabled
CONFIG_REQUIRED_2: enabled
CONFIG_OPTIONAL_1: not set
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: not set
CONFIG_ALIASE_FORBIDDEN_2: not set
CONFIG_REQUIRED_1: enabled
CONFIG_ALIASE_REQUIRED_2: not set
CONFIG_OPTIONAL_1: not set
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: not set
CONFIG_ALIASE_FORBIDDEN_2: not set
CONFIG_REQUIRED_1: enabled
CONFIG_ALIASE_REQUIRED_2: enabled (as module)
CONFIG_OPTIONAL_1: not set
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: not set
CONFIG_ALIASE_FORBIDDEN_2: not set
CONFIG_REQUIRED_1: enabled
CONFIG_REQUIRED_2: enabled (as module)
CONFIG_OPTIONAL_1: enabled
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: not set
CONFIG_ALIASE_FORBIDDEN_2: not set
CONFIG_REQUIRED_1: enabled
CONFIG_REQUIRED_2: enabled (as module)
CONFIG_OPTIONAL_1: not set
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: disabled
CONFIG_ALIASE_FORBIDDEN_2: not set
CONFIG_REQUIRED_1: enabled
CONFIG_REQUIRED_2: enabled (as module)
CONFIG_OPTIONAL_1: not set
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: enabled - TEST FORBIDDEN
CONFIG_ALIASE_FORBIDDEN_2: not set
CONFIG_REQUIRED_1: enabled
CONFIG_REQUIRED_2: enabled (as module)
CONFIG_OPTIONAL_1: not set
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: enabled (as module) - TEST FORBIDDEN
CONFIG_ALIASE_FORBIDDEN_2: not set
CONFIG_REQUIRED_1: enabled
CONFIG_REQUIRED_2: enabled (as module)
CONFIG_OPTIONAL_1: not set
CONFIG_OPTIONAL_2: not set
CONFIG_FORBIDDEN_1: not set
CONFIG_ALIASE_FORBIDDEN_2: enabled (as module)
OS: Linux
OS: Windows
OS: Darwin
foo (>=1.0): 1.0.0
bar (>=2.0 <= 3.0): 2.1.0
foo (>=1.0): 1.0.0
bar (>=3.0): 2.1.0
baz (): not installed
bar-test-kernel-release (>=3.0): 3.0.0
FAIL
FAIL k8s.io/kubernetes/cmd/kubeadm/app/util/system 0.034s
? k8s.io/kubernetes/cmd/kubeadm/test [no test files]
--- FAIL: TestCmdInitToken (0.10s)
--- FAIL: TestCmdInitToken/valid_token_is_accepted (0.03s)
init_test.go:71:
CmdInitToken test case "valid token is accepted" failed with an error: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [init --dry-run --ignore-preflight-errors=all --token=abcdef.0123456789abcdef];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
command 'kubeadm init --token=abcdef.0123456789abcdef'
expected: true
err: false
--- FAIL: TestCmdInitKubernetesVersion (0.05s)
--- FAIL: TestCmdInitKubernetesVersion/valid_version_is_accepted (0.02s)
init_test.go:115:
CmdInitKubernetesVersion test case "valid version is accepted" failed with an error: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [init --dry-run --ignore-preflight-errors=all --kubernetes-version=1.13.0];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
command 'kubeadm init --kubernetes-version=1.13.0'
expected: true
err: false
--- FAIL: TestCmdInitConfig (0.17s)
--- FAIL: TestCmdInitConfig/can_load_v1alpha3_config (0.02s)
init_test.go:184:
CmdInitConfig test case "can load v1alpha3 config" failed with an error: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [init --dry-run --ignore-preflight-errors=all --config=testdata/init/v1alpha3.yaml];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
command 'kubeadm init --config=testdata/init/v1alpha3.yaml'
expected: true
err: false
--- FAIL: TestCmdInitConfig/can_load_v1beta1_config (0.02s)
init_test.go:184:
CmdInitConfig test case "can load v1beta1 config" failed with an error: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [init --dry-run --ignore-preflight-errors=all --config=testdata/init/v1beta1.yaml];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
command 'kubeadm init --config=testdata/init/v1beta1.yaml'
expected: true
err: false
--- FAIL: TestCmdInitCertPhaseCSR (0.07s)
--- FAIL: TestCmdInitCertPhaseCSR/generate_CSR (0.02s)
init_test.go:256: couldn't run kubeadm: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [init phase certs apiserver-kubelet-client --csr-only --csr-dir=/var/folders/f1/d_jnzpdj3wd7x054mzvzqtbh0000gn/T/857940999];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
--- FAIL: TestCmdInitCertPhaseCSR/fails_on_CSR (0.02s)
init_test.go:250: expected "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n" to contain "unknown flag: --csr-only"
--- FAIL: TestCmdInitCertPhaseCSR/fails_on_all (0.02s)
init_test.go:250: expected "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n" to contain "unknown flag: --csr-only"
--- FAIL: TestCmdInitAPIPort (0.09s)
--- FAIL: TestCmdInitAPIPort/accept_a_valid_port_number (0.02s)
init_test.go:303:
CmdInitAPIPort test case "accept a valid port number" failed with an error: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [init --dry-run --ignore-preflight-errors=all --apiserver-bind-port=6000];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
command 'kubeadm init --apiserver-bind-port=6000'
expected: true
err: false
--- FAIL: TestCmdTokenGenerate (0.02s)
token_test.go:53: 'kubeadm token generate' exited uncleanly: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [token generate];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
--- FAIL: TestCmdVersion (0.07s)
--- FAIL: TestCmdVersion/short_output (0.02s)
version_test.go:58: failed CmdVersion running 'kubeadm version --output=short' with an error: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [version --output=short];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
expected: true
actual: false
version_test.go:73: 'kubeadm version --output=short' stdout did not match expected regex; wanted: ["^v.+\n$"], got: []
--- FAIL: TestCmdVersion/default_output (0.02s)
version_test.go:58: failed CmdVersion running 'kubeadm version ' with an error: error running /Users/cha/go/src/k8s.io/kubernetes/cluster/kubeadm.sh [version ];
stdout "",
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
got error: exit status 1
expected: true
actual: false
version_test.go:73: 'kubeadm version ' stdout did not match expected regex; wanted: ["^kubeadm version: &version\\.Info{Major:\".+\", Minor:\".+\", GitVersion:\".+\", GitCommit:\".+\", GitTreeState:\".+\", BuildDate:\".+\", GoVersion:\".+\", Compiler:\".+\", Platform:\".+\"}\n$"], got: []
panic: interface conversion: interface {} is nil, not map[string]interface {} [recovered]
panic: interface conversion: interface {} is nil, not map[string]interface {}
goroutine 210 [running]:
testing.tRunner.func1(0xc00010b800)
/Users/cha/.gimme/versions/go1.11.5.darwin.amd64/src/testing/testing.go:792 +0x387
panic(0x1c6f840, 0xc0003b2000)
/Users/cha/.gimme/versions/go1.11.5.darwin.amd64/src/runtime/panic.go:513 +0x1b9
k8s.io/kubernetes/cmd/kubeadm/test/cmd.TestCmdVersionOutputJsonOrYaml.func1(0xc00010b800)
/Users/cha/go/src/k8s.io/kubernetes/cmd/kubeadm/test/cmd/version_test.go:125 +0x8d2
testing.tRunner(0xc00010b800, 0xc000427ce0)
/Users/cha/.gimme/versions/go1.11.5.darwin.amd64/src/testing/testing.go:827 +0xbf
created by testing.(*T).Run
/Users/cha/.gimme/versions/go1.11.5.darwin.amd64/src/testing/testing.go:878 +0x35c
FAIL k8s.io/kubernetes/cmd/kubeadm/test/cmd 1.421s
? k8s.io/kubernetes/cmd/kubeadm/test/kubeconfig [no test files]
on linux these are passing for me:
? k8s.io/kubernetes/cmd/kubeadm/test [no test files]
ok k8s.io/kubernetes/cmd/kubeadm/test/cmd 87.254s
stderr "It looks as if you don't have a compiled kubeadm binary\n\nIf you are running from a clone of the git repo, please run\n'./build/run.sh make cross'. Note that this requires having\nDocker installed.\n\nIf you are running from a binary release tarball, something is wrong. \nLook at http://kubernetes.io/ for information on how to contact the \ndevelopment team for help.\n",
do you have a compiled binary?
what happens if you use ./hack/make-rules/test-kubeadm.sh instead?
(it should build a binary)
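For context, the failing cmd tests shell out to cluster/kubeadm.sh, which prints that message when no binary has been built. A hedged sketch of how such tests could skip instead of failing when the binary is missing (`KUBEADM_PATH` and the helper below are illustrative, not the repo's actual API):

```go
package cmd

import (
	"os"
	"os/exec"
	"testing"
)

// requireKubeadmBinary skips the calling test when no kubeadm binary can
// be found, instead of letting cluster/kubeadm.sh exit with status 1.
func requireKubeadmBinary(t *testing.T) {
	t.Helper()
	path := os.Getenv("KUBEADM_PATH") // illustrative override
	if path == "" {
		path = "kubeadm" // fall back to whatever is on PATH
	}
	if _, err := exec.LookPath(path); err != nil {
		t.Skipf("no compiled kubeadm binary (%v); run './build/run.sh make cross' first", err)
	}
}
```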
It looks like the tests must be:
1) run as root,
2) run from hack/make-rules/test-kubeadm-cmd.sh and not using the standard go testing tools,
3) run on linux systems only, and
4) run on a system with a validated docker version.
This may be a documentation issue. Is there a developer's guide that documents this? Perhaps I missed a readme? I'm hoping it's my fault and not tribal knowledge.
i found this the hard way. afaik there is no guide, but this is something for contrib-ex to consider, perhaps.
Ok. I'll close this in favor of a new issue outlining how we could make this process less painful for new contributors (and existing contributors that haven't been paying terribly close attention >.<)