Bazel: incompatible_use_python_toolchains: The Python runtime is obtained from a toolchain rather than a flag

Created on 29 Mar 2019 · 37 comments · Source: bazelbuild/bazel

Flag: --incompatible_use_python_toolchains
Available since: 0.25
Will be flipped in: 0.27
Feature tracking issue: #7375

FAQ (common problems)

I'm getting Python 2 vs 3 errors

This flag fixes #4815 on non-Windows platforms, so your code might now be running under a different version of Python than it was in previous Bazel versions. You may notice this as a Python stack trace complaining about bad print syntax, problems with bytes vs str (encode/decode), unknown imports, etc.
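As a concrete illustration (a sketch, not from the original post): a common symptom is Python 2 code that calls `.decode()` on a `str`, which raises `AttributeError` once the same code runs under Python 3:

```python
# Python 2 code often calls .decode() on str objects; under Python 3,
# str has no .decode method, so the same line raises AttributeError.
text = "caf\xc3\xa9"
try:
    decoded = text.decode("utf-8")  # valid on Python 2, where str is bytes
    ran_under_py2 = True
except AttributeError:
    ran_under_py2 = False  # Python 3: str has no .decode
```

Errors like this one usually mean the target is now running under a different interpreter version than before.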

In order for your code to run under the proper version of Python, make sure Python 2 binaries and tests have the attribute python_version = "PY2" (the default is PY3).
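For example, a minimal declaration might look like this (target and file names hypothetical):

```python
py_binary(
    name = "my_py2_tool",      # hypothetical target
    srcs = ["my_py2_tool.py"],
    python_version = "PY2",    # without this, the default is PY3
)
```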

For targets that are built in the host configuration (for example, genrule tools), python_version has no effect. It is currently impossible for PY2 and PY3 host-configured targets to co-exist in the same build; they are always overridden to one version or the other, depending on the value of --host_force_python. This incompatible change does not affect how the host config works; it just makes targets actually run with the version the host config specifies. If you (or your dependencies) have host-configured tools that require Python 2, and which are now failing because they're running under Python 3, add --host_force_python=PY2 to your bazelrc (the default value is PY3).

Bazel 0.27 introduces a diagnostic message when a host-configured tool fails at run time (non-zero exit code), alerting you when it may be necessary to set this flag.

The default Python toolchain can't find the interpreter

If you get an error like this:

Error: The default python toolchain (@bazel_tools//tools/python:autodetecting_toolchain) was unable to locate a suitable Python interpreter on the target platform at execution time. Please register an appropriate Python toolchain. [...]
Failure reason: Cannot locate 'python3' or 'python' on the target platform's PATH, which is: [...]

Determine whether you have python2, python3, and/or python on your shell PATH. For py_test targets, and for py_binary targets used as tools (in genrules, etc.), also check whether your PATH is being manipulated by the flags --incompatible_strict_action_env and/or --action_env=PATH=[...]. For instance, the strict action environment does not include /usr/local/bin in PATH by default, which is where python3 is typically located on Mac, if it is installed at all. See also #8536.

If modifying your PATH is not feasible, try defining and registering your own Python toolchain as described at the bottom of this post.

I don't have Python 3 installed (e.g. default Mac environment)

Previously, if you didn't have a Python 3 interpreter but all your code was compatible with Python 2, Bazel would happily analyze it as PY3 and execute it using a Python 2 python command. Now this breaks because the autodetecting toolchain validates that python is actually Python 3.

The ideal solution is to not depend on Python 3 code, or else to install a Python 3 environment on the target system. The practical workaround is to opt out of version checking by using the non-strict autodetecting toolchain. The error message tells you how: add the following to your bazelrc:

build --extra_toolchains=@bazel_tools//tools/python:autodetecting_toolchain_nonstrict

Note that you will not benefit from the fix to #4815 as long as you are using this toolchain.

If you're using a custom Python toolchain (using py_runtime_pair, as described at the bottom of this post), you can have the py3_runtime attribute point to a py_runtime that declares itself as PY3 but in actuality references a Python 2 interpreter. This abuse of version information achieves the same result: PY3-analyzed targets get run with a Python 2 interpreter.
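Concretely, such a mislabeled pair might look like this (target names and interpreter path hypothetical):

```python
py_runtime(
    name = "fake_py3_runtime",
    interpreter_path = "/usr/bin/python2",  # actually a Python 2 interpreter
    python_version = "PY3",                 # deliberately mislabeled
)

py_runtime_pair(
    name = "lenient_pair",
    py3_runtime = ":fake_py3_runtime",
)
```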

Neither of these approaches is recommended for anyone but end-users, since they affect how Python targets get run globally throughout the build.

I'm a rule author and I want my target to run regardless of whether the downstream user has Python 2 or 3

See this comment.

Did the behavior of toolchains change between 0.26 and 0.27?

This incompatible change has been available since 0.25 and was flipped to true by default in 0.27. Bazel 0.27 introduces some bug fixes in the behavior of the autodetecting toolchain, better diagnostic messages, and the non-strict toolchain.

Motivation

For background on toolchains, see here.

Previously, the Python runtime (i.e., the interpreter used to execute py_binary and py_test targets) could only be controlled globally, and required passing flags like --python_top to the bazel invocation. This is out-of-step with our ambitions for flagless builds and remote-execution-friendly toolchains. Using the toolchain mechanism means that each Python target can automatically select an appropriate runtime based on what target platform it is being built for.

Change

Enabling this flag triggers the following changes.

  1. Executable Python targets will retrieve their runtime from the new Python toolchain.

  2. It is forbidden to set any of the legacy flags --python_top, --python2_path, or --python3_path. Note that the last two of those are already no-ops. It is also strongly discouraged to set --python_path, but this flag will be removed in a later cleanup due to #7901.

  3. The python_version attribute of the py_runtime rule becomes mandatory. It must be either "PY2" or "PY3", indicating which kind of runtime it is describing.

For builds that rely on a Python interpreter installed on the system, it is recommended that users (or platform rule authors) ensure that each platform has an appropriate Python toolchain definition.

If no Python toolchain is explicitly registered, on non-Windows platforms there is a new default toolchain that automatically detects and executes an interpreter (of the appropriate version) from PATH. This resolves longstanding issue #4815. A Windows version of this toolchain will come later (#7844).

Migration

See the above FAQ for common issues with the autodetecting toolchain.

If you were relying on --python_top, and you want your whole build to continue to use the py_runtime you were pointing it to, you just need to follow the steps below to define a py_runtime_pair and toolchain, and register this toolchain in your workspace. So long as you don't add any platform constraints that would prevent your toolchain from matching, it will take precedence over the default toolchain described above.

If you were relying on --python_path, and you want your whole build to use the interpreter located at the absolute path you were passing in this flag, the steps are the same, except you also have to define a new py_runtime with the interpreter_path attribute set to that path.

Otherwise, if you were only relying on the default behavior that resolved python from PATH, just enjoy the new default behavior, which is:

  1. First try python2 or python3 (depending on the target's version)
  2. Then fall back on python if not found
  3. Fail-fast if the interpreter that is found doesn't match the target's major Python version (PY2 or PY3), as per the python -V flag.
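The major-version check in step 3 can be sketched in plain Python (a rough illustration of the idea, not the toolchain's actual script):

```python
import subprocess
import sys

def interpreter_major_version(interpreter):
    """Return the major version reported by `<interpreter> -V`.

    A sketch of the kind of check the autodetecting toolchain performs
    before executing a target with the interpreter it found on PATH.
    """
    # Python 2 printed its `-V` output to stderr, Python 3 to stdout,
    # so capture both streams together.
    proc = subprocess.run(
        [interpreter, "-V"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    output = proc.stdout.decode()  # e.g. "Python 3.8.10"
    return int(output.split()[1].split(".")[0])

major = interpreter_major_version(sys.executable)
```

If the reported major version doesn't match the target's declared version, the strict toolchain fails fast rather than running the target.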

On Windows the default behavior is currently unchanged (#7844).

Example toolchain definition

```python
# In your BUILD file...
load("@bazel_tools//tools/python:toolchain.bzl", "py_runtime_pair")

py_runtime(
    name = "my_py2_runtime",
    interpreter_path = "/system/python2",
    python_version = "PY2",
)

py_runtime(
    name = "my_py3_runtime",
    interpreter_path = "/system/python3",
    python_version = "PY3",
)

py_runtime_pair(
    name = "my_py_runtime_pair",
    py2_runtime = ":my_py2_runtime",
    py3_runtime = ":my_py3_runtime",
)

toolchain(
    name = "my_toolchain",
    target_compatible_with = [...],  # optional platform constraints
    toolchain = ":my_py_runtime_pair",
    toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
```

```python
# In your WORKSPACE...
register_toolchains("//my_pkg:my_toolchain")
```

Of course, you can define and register many different toolchains and use platform constraints to restrict them to appropriate target platforms. It is recommended to use the constraint settings @bazel_tools//tools/python:py2_interpreter_path and [...]:py3_interpreter_path as the namespaces for constraints about where a platform's Python interpreters are located.
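For instance, a toolchain restricted to a particular platform might look like this sketch (the constraint-value and toolchain names are hypothetical; only the constraint-setting and toolchain-type labels come from the text above):

```python
constraint_value(
    name = "py3_in_usr_local",  # hypothetical value
    constraint_setting = "@bazel_tools//tools/python:py3_interpreter_path",
)

toolchain(
    name = "my_mac_toolchain",
    target_compatible_with = [
        "@platforms//os:osx",
        ":py3_in_usr_local",
    ],
    toolchain = ":my_py_runtime_pair",
    toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
```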

The new toolchain-related rules and default toolchain are implemented in Starlark under @bazel_tools. Their source code and documentation strings can be read here.

breaking-change-0.27 incompatible-change migration-0.25 migration-0.26 team-Rules-Python



I'm trying this flag for rules_k8s in the context of https://github.com/bazelbuild/rules_k8s/issues/305.
If I run a build with an explicit --python_top my build / test succeeds (https://github.com/bazelbuild/rules_k8s/pull/306)
However if I use the new --incompatible_use_python_toolchains I get the following error:

ERROR: .../src/rules_k8s/fork/k8s/BUILD:58:1: Building par file //k8s:resolver.par failed (Exit 1) compiler.par failed: error executing command bazel-out/host/bin/external/subpar/compiler/compiler.par --manifest_file bazel-out/k8-fastbuild/bin/k8s/resolver.par_SOURCES --outputpar bazel-out/k8-fastbuild/bin/k8s/resolver.par --stub_file ... (remaining 4 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox
src/main/tools/linux-sandbox-pid1.cc:427: "execvp(bazel-out/host/bin/external/subpar/compiler/compiler.par, 0x1830290)": No such file or directory

Note I did not experience this type of error when trying this flag out with rules_docker (https://github.com/bazelbuild/rules_docker/pull/787)

See also https://github.com/bazelbuild/rules_k8s/issues/305 for context

Unfortunately this broke a large number of downstream projects, mostly because it actually enforces that the Python interpreter has the requested version. Looks like switching the default Python version in Bazel 0.25 from PY2 to PY3 was too easy precisely because it wasn't being enforced at execution time. :(

We'll need to do some downstream fixing before we can flip again.


FWIW I think this is https://github.com/google/subpar/issues/98

Documentation here has a small typo. Currently says:
load("@bazel_tools//tools/python/toolchain.bzl", "py_runtime_pair")
when it should be
load("@bazel_tools//tools/python:toolchain.bzl", "py_runtime_pair")
Note /toolchain.bzl vs :toolchain.bzl

Fixed in 331c84b41188549d57cac6a79474d10fbe14587c. :)

Downstream buildkite run here. Cataloging the failures:

  • Tests that run on mac workers fail because python3 is not on the PATH propagated to actions.

  • Android Testing failures are tracked here

  • Bazel's own failures are android-related so they're probably the same cause as the above.

  • The Bazel Toolchains failure is due to a PY2 host tool running as PY3.

  • The Cloud Robotics Core failure is also a host config issue.

  • Remote execution and Rules jvm external have some android issues.

  • rules_k8s has the same issue as the above host-config stuff with Bazel Toolchains

We're now targeting 0.27 for flipping this flag. In the absence of the execution transition feature, all projects that require PY2 host tools should set --host_force_python=PY2.

all projects that require PY2 host tools should set --host_force_python=PY2.

AFAICT, this includes all projects that use certain parts of rules_docker, eg container_push. It depends on a Python 2-only version of httplib2: https://github.com/bazelbuild/rules_docker/blob/6fc0137cae4936b67c3c05dde1f77c88f59379f6/repositories/repositories.bzl#L89

@nlopezgi If this is flipped, you may need to document rules_docker's requirement of --host_force_python=PY2.

Is there a way code can be compatible with either Python version, eg something like py2or3_library()? Otherwise I don't see how a build could turn off --host_force_python=PY2 without a big-bang upgrade of all Python dependencies.

thanks for looping me in. Yes, we will need to update the docs to point to use of this flag. For now, we have updated the .bazelrc file in rules_docker (https://github.com/bazelbuild/rules_docker/blob/master/.bazelrc) to include this flag. Once 0.27 is out, I'll make sure to update the docs as required.

It's unfortunately true that downstream projects will need to pass this flag as well. As for avoiding a big-bang change when turning off the flag, I think the answer to that will be to migrate away from the host configuration once the execution transition is ready.

It's unfortunately true that downstream projects will need to pass this flag as well.

I'm a little concerned that this is hard to discover. We were lucky to have the issue you filed on our repository to point us to the flag, but other users will see their build fail on 0.27 with something like:

ContainerPushDigest push_foo.digest failed (Exit 1) digester failed: error executing command bazel-out/host/bin/external/containerregistry/digester --config bazel-out/k8-fastbuild/bin/main.0.config --manifest bazel-out/k8-fastbuild/bin/main.0.manifest --digest ... (remaining 21 argument(s) skipped)                                  

Use --sandbox_debug to see verbose messages from the sandbox
Traceback (most recent call last):
  File ".../sandbox/linux-sandbox/14/execroot/__main__/bazel-out/host/bin/external/containerregistry/digester.runfiles/containerregistry/tools/image_digester_.py", line 28, in <module>
    from containerregistry.client.v2_2 import docker_image as v2_2_image
  File ".../sandbox/linux-sandbox/14/execroot/__main__/bazel-out/host/bin/external/containerregistry/digester.runfiles/containerregistry/client/__init__.py", line 23, in <module>
    from containerregistry.client import docker_creds_
  File ".../sandbox/linux-sandbox/14/execroot/__main__/bazel-out/host/bin/external/containerregistry/digester.runfiles/containerregistry/client/docker_creds_.py", line 31, in <module>
    import httplib2
  File ".../sandbox/linux-sandbox/14/execroot/__main__/bazel-out/host/bin/external/containerregistry/digester.runfiles/httplib2/__init__.py", line 28, in <module>
    import email.FeedParser
ModuleNotFoundError: No module named 'email.FeedParser'

If they're lucky, they'll connect "ContainerPushDigest" to rules_docker, check the docs and see the flag mentioned. I wouldn't have done that; I'd probably have tried to update rules_docker and found that it didn't help. I'd have tried to build the rules_docker repo and seen that it worked fine. I might have used bazelisk --migrate with the old bazel version to identify the flag, found this issue and discovered --host_force_python.

I don't have a great idea on how to improve this. Perhaps adding migration instructions to the release notes would help (do people read them?) or perhaps rules_docker could add an assertion that Python 3 is being used, instructing the user to add --host_force_python if not.

That's a very good point. I have an idea how we could make this experience nicer.

The py_binary rule can detect at analysis time when it is getting the wrong Python version as a consequence of being used in the host configuration. It can tell this by the fact that its python_version attribute won't match the version stored in the configuration state.

When this situation is detected, we have a few options.

  1. We can fail-fast with an error. But this means that we cannot support PY2 and PY3 py_binarys in the same build, even if the actual user Python code is compatible with both.

  2. We can emit a warning. This may be spammy, but it gives a very good chance that a user looking through a traceback will see the likely cause of their problem.

  3. We could also do either of the above and provide a way to silence the failure/warning on a per-target basis by setting an attribute a certain way, but this would be ugly to implement and eventually cleanup. It also wouldn't work in cases where the host tool is in an upstream repo.

  4. When this situation is detected, we carry on as normal but pass this information to the generated stub script. The stub script will then monitor the user Python code, and if it has a non-zero exit code, emit additional text explaining that the problem may be due to this issue. (This implies a slight performance penalty of changing an os.execv call to subprocess, but that code path would only be activated for host tools with this kind of version mismatch.)

I think option 4 is the most versatile. Note that any solution to this problem will be useful not just for the 0.27 migration, but also as long as we still have Python tools in the host configuration going forward, i.e. as long as the host configuration is still a thing.
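A minimal sketch of what option 4's stub behavior could look like (all names here are hypothetical; the real stub script differs):

```python
import subprocess
import sys

def run_with_version_diagnostic(argv, expected_version, actual_version):
    """Run the user program via subprocess instead of os.execv, and print a
    hint if it fails while the Python versions are mismatched."""
    result = subprocess.run(argv)
    if result.returncode != 0 and expected_version != actual_version:
        print(
            "Note: this tool was built for Python %d but ran under Python %d; "
            "the failure may be a host-configuration version mismatch "
            "(see --host_force_python)." % (expected_version, actual_version),
            file=sys.stderr,
        )
    return result.returncode

# Simulate a PY2 tool failing while running under Python 3.
code = run_with_version_diagnostic(
    [sys.executable, "-c", "import sys; sys.exit(1)"], 2, 3)
```

The diagnostic only fires on a non-zero exit combined with a version mismatch, so correctly running tools pay no cost beyond the subprocess indirection.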

Good ideas! 4 sounds good to me if the complexity is acceptable. The only downside I see is that it will confuse users who see the warning spuriously when something unrelated breaks their genrule.

If the stub can know its python_version, could it fall back to using the system interpreter when the python_version doesn't match? Combined with a warning, this could provide the smoothest transition path, although I can imagine that it has other consequences.

FYI, while I was reading up on python_version I noticed this comment, which you might want to adjust to reflect recent changes: https://github.com/bazelbuild/bazel/blob/36f1c0df806e47707f7dfc1c49c96f6f4eb34e6a/src/main/java/com/google/devtools/build/lib/bazel/rules/python/BazelPyRuleClasses.java#L202

We can emit a warning. This may be spammy, but it gives a very good chance that a user looking through a traceback will see the likely cause of their problem.

A fresh build of our project prints out 150 lines of warnings due to #7157 so we could well miss it. Printing the warning in the stub would make it more noticeable.

I noticed this comment, which you might want to adjust

Thanks, but unfortunately #4815 is still relevant until this flag is flipped and until we also have an autodetecting Python toolchain on Windows.

A fresh build of our project prints out 150 lines of warnings due to #7157 so we could well miss it.

A good argument to do it in the stub script then. Note that since we'll emit it only after the user code has dumped its stack trace, the relevant message should be closer to where the user is looking in the logs.

FYI for anyone depending on subpar, the 2.0.0 release makes it compatible with the default Python toolchain.

I think option 4 is the most versatile. Note that any solution to this problem will be useful not just for the 0.27 migration, but also as long as we still have Python tools in the host configuration going forward, i.e. as long as the host configuration is still a thing.

This option sgtm too. Please let me know how this proceeds so I can document as best as possible in rules_docker docs.
Also, is implementing any of these alternatives considered blocking for 0.27.0?

I believe I can implement 4, which should supersede the other choices, and do it in time for 0.27.

Great! thanks!

Here are a few other ideas to help make this migration smoother:

  • Provide a recipe for a toolchain that parses the shebang of the main .py file, if it exists, and delegates to that interpreter. Attempt to ensure this toolchain only matches the host platform.

  • Add a hack to the py_binary rule to select (from the toolchain) the runtime whose version matches the value of python_version, rather than the value of the version in the configuration.

Both of these amount to pretty much the same thing: Detect when we're in the host config and getting the wrong version, and pick the right version interpreter in spite of the configuration. But since these workarounds don't actually change the configuration state, any select()s in the Python target or its transitive dependencies will see the wrong Python version, and any srcs_version constraints that require the correct version may cause the build to fail.

I'll note that at no point so far has Bazel ever supported multiple Python versions in the host configuration. So these workarounds would add expressivity that was not there before. The breakages caused by the toolchain flag occur not because you need both versions, but because you need just one, and you're not getting that version anymore because #4815 is being fixed. So I think --host_force_python + better diagnostics should be sufficient for the purposes of flipping this flag.

If the stub can know its python_version, could it fall back to using the system interpreter when the python_version doesn't match?

Sorry @drigz, just realized you had suggested a variant of what I later described above. Yes, that's similar to the "choose the right runtime from the toolchain in spite of the configuration" approach. I'm not sure whether that's better or worse than adding a warning upon failure.

I guess I'm a bit concerned that it makes it possible for the build to lumber along in a partially broken state. Sure you get the right interpreter, but your select()s (if you use any) will see the wrong version, and if someone adds a correct PY[2|3]ONLY constraint to the srcs_version of a transitive dependency, you break.

So maybe I'd rather force the user to set --host_force_python, since that also sets the configuration to be consistent with what's actually run.

No problem. The warning seems reasonable to me.

your select()s (if you use any) will see the wrong version

Do you have an example of a select() for Python version? IIUC this would let rules_docker use the appropriate version of httplib2 depending on the value of --host_force_python.

Documentation is in-line with the source here. You can basically just select on the config settings @bazel_tools//tools/python:PY2 and :PY3. (No need for a default case since those two options cover all possibilities.) If you need a select branch with more conjunctive conditions you can define your own config setting using the :python_version target.
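For instance (the repository labels are hypothetical), a library could pick a version-appropriate dependency like so:

```python
py_library(
    name = "httplib2_compat",
    deps = select({
        "@bazel_tools//tools/python:PY2": ["@httplib2_py2//:httplib2"],
        "@bazel_tools//tools/python:PY3": ["@httplib2_py3//:httplib2"],
    }),
)
```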

Thanks! I tried to make rules_docker compatible with both, could you have a look? It was a bit harder than I expected but it should unblock the Py3 migration for some users. https://github.com/bazelbuild/rules_docker/pull/843

I experienced an issue using this flag on macOS https://github.com/bazelbuild/bazel/issues/8414 that I submitted a fix for https://github.com/bazelbuild/bazel/pull/8415

Aiming to flip at head imminently. Here's the latest downstream presubmit at head with the flag flip. As a control, here's a run at the same commit without the flag flip.

  • [x] Android testing is already failing with a different error, but it looks like this flag flip still triggers a Python version related problem, already tracked in googlesamples/android-testing#266.

  • [x] Bazel bench fails. Filed bazelbuild/bazel-bench#29.

  • [x] Bazel toolchains is still failing, already tracked in bazelbuild/bazel-toolchains#501.

  • [x] Cloud robotics is still failing, tracked in googlecloudrobotics/core#9.

  • [x] Tensorflow has failures. Filed tensorflow/tensorflow#29220. Should be fixed by adding the bazelrc flag but hard to say since CI is broken for other reasons.

  • [x] rules_apple breaks. Filed bazelbuild/rules_apple#456. Fix is known (update project's .bazelci).

  • [x] One of the rules_jvm_external pipelines is already failing for a different error but then fails for this flag flip on top. Filed bazelbuild/rules_jvm_external#157.

  • [x] Both rules_nodejs and gerrit break in CI due to how --incompatible_strict_action_env interacts with the autodetecting toolchain and our mac worker environment. Tracked in bazelbuild/rules_nodejs#809 and https://bugs.chromium.org/p/gerrit/issues/detail?id=10953, and in #8536.

  • [x] Our own remote execution pipeline fails. Filed #8538.

  • [x] Tulsi is affected by the rules_apple breakage above. Filed bazelbuild/tulsi#94

Update:

  • Tensorflow is still not running in CI, so can't reproduce.

  • rules_apple may be fixed in CI, not confirmed yet, but downstream projects shouldn't be blocked

  • rules_nodejs should have a simple fix available

  • Can't repro the gerrit failure due to other CI failures

Can't repro the gerrit failure due to other CI failures

The other CI failures have been fixed in: https://gerrit-review.googlesource.com/c/gerrit/+/227252. This change was merged up to master.

The root cause of the gerrit issue is tracked in #8536. rules_apple is fixed. rules_nodejs is probably fixed but waiting on tomorrow's downstream CI run to confirm. That just leaves tensorflow, which is still disabled, and which I assume we're not blocked on.

Hey @brandjon,
did I understand correctly that, for targets that are built in the host configuration, it is impossible to have different Python versions? Any idea whether that will be implemented in the future?

Right. There's only one host configuration globally, and once you enter the host config, none of your dependencies can ever leave. So everything built in the host config shares the same Python version (set by --host_force_python).

The plan is to deprecate the host config and replace it with the "exec" config, which serves a similar purpose but behaves like an ordinary configuration state in which Python versions and other things work normally. The exec config is already available for starlark rules (cfg = "exec" instead of cfg = "host"), and we're rolling it out to genrules via a new exec_tools attribute that you can migrate dependences to from the tools attribute. Eventually exec_tools will effectively turn back into tools. See also #6443.
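As a sketch of that migration (all labels hypothetical), a genrule moves its Python tool from tools to exec_tools:

```python
genrule(
    name = "generate",
    srcs = ["input.txt"],
    outs = ["output.txt"],
    cmd = "$(location //tools:gen_tool) $(SRCS) > $@",
    # Previously: tools = ["//tools:gen_tool"],  # built in the host config
    exec_tools = ["//tools:gen_tool"],  # built in the exec config instead
)
```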

Thanks for the clarification. Unfortunately, for some reason --host_force_python doesn't work in our project setup :( and it also wouldn't work for us anyway, since we need different versions for different tools

@brandjon I'm trying to figure out how to integrate my cython build rules with the new Python toolchain. At the moment, gRPC's Cython build very closely mirrors tensorflow's. We use a repository rule to determine the location of the Python header files, so we can use them to compile our C extension. My hope was that I could get the toolchain injected into the repository rule, determine the path to the headers by invoking the supplied interpreter, and add a symlinked rule for it. However, unlike normal rules, repository rules don't have access to toolchains. How would you recommend we get access to Python headers using the new toolchain system?

Context: https://github.com/grpc/grpc/pull/19462

The new Python-toolchain system doesn't support conveying header information any more than the old --python_top/--python_path system did. You get around this by grabbing header information in a repo rule. Presumably the problem is that the toolchain change made it so the interpreter used at execution time is no longer the same one discovered by the repo rule.

Assuming that you didn't register any Python toolchains in your build, the default behavior at execution time is still to do a lookup in PATH to find the interpreter. The change is that now it cares whether the target is PY2 or PY3, and looks for python2 or python3 accordingly instead of just python. I suspect what's happening is that your repo rule locates python2 headers but your targets are declared (perhaps implicitly since it's the default) as PY3.

You might change your targets to be PY2 using the python_version = "PY2" attribute and if necessary --host_force_python=PY2. Alternatively, you can augment your repo rule to find headers for python3 and ensure your targets select the appropriate dependency based on their own version. (select() can use @bazel_tools//tools/python:PY[2|3] to tell what the version is.)

@brandjon Thank you for the reply!

Presumably the problem is that the toolchain change made it so the interpreter used at execution time is no longer the same one discovered by the repo rule.

Actually, the opposite. The Python 2 tests failed because the C extension was being compiled against Python 3 headers.

Alternatively, you can augment your repo rule to find headers for python3 and ensure your targets select the appropriate dependency based on their own version.

This solution sounds perfect.

Actually, the opposite. The Python 2 tests failed because the C extension was being compiled against Python 3 headers.

Is it possible those were always Python 3 headers, and your Python 2 tests were actually being run with Python 3? (This would probably only happen if the python command is a Python 3 interpreter on your system.)

I'm just not sure that anything that changed with toolchains could cause a change to the behavior of the repo rule.

I'm currently facing some issues with this on a VSCode Golang Devcontainer that has python2 and 3 installed.
When I'm trying to run a container_push job it will complain about the python version.

However, what is not clear to me is how to use the toolchain definition. Do I have to have one of those for each BUILD file, or is it just a "generic" BUILD that has the toolchain definition to be used by later jobs?

Is this documented in the Bazel docs? I'm not finding it, at least not specifically for handling this issue with Python versions...

General toolchain documentation is here. For the Python-specific part of it, see the top comment and the (recently moved to Stardoc) py_runtime_pair documentation.

You register a toolchain for your entire build in either your WORKSPACE file or via the --extra_toolchains command line argument. Any given target picks a toolchain from the set of registered toolchains depending on the platform constraints. In the case where you register a Python toolchain with no constraints, it'll take precedence over the default toolchain and be used for every target in your build. (Note that a single Python toolchain specifies information about both the Python 2 interpreter and Python 3 interpreter.)
