Cargo: Make commands in dev-dependencies available to run

Created on 10 Jan 2016 · 15 comments · Source: rust-lang/cargo

I have a Rust project which uses a few shell scripts that need commands from other crates. I added them as dev-dependencies, but I couldn't find the binaries anywhere. It would be nice if the binaries were available to run from the command line.

As an example, npm makes binaries available in node_modules/.bin, so you can put them in devDependencies and use them in scripts.

Labels: A-build-scripts, C-feature-request


All 15 comments

This sounds like it may be more of a use case for cargo install? We typically don't make binaries from dependencies available to the main application.

The issues that I can see with cargo install for this use case are:

  1. No record of dependencies. Let's say your project needs various binary crates for development, for example because they're called by non-rust glue and utility scripts. This creates the need to record somewhere that you'll need to cargo install xyz, which complicates getting started on a project, or starting work on it on a new machine.
  2. No versioning. Although you can use cargo install --vers, dependency on a specific version of a crate isn't recorded.
  3. Global install by default. Since cargo install installs globally by default, it isn't possible out of the box to keep separate installations of tools for different projects, in case they depend on different versions of the same binary crate. It's possible to get around this by doing cargo install --root or using the install.root configuration key and specifying a local location, but I suspect that this is a common enough use-case to warrant special handling.
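For point 3, the `install.root` workaround can at least be made per-project and checked into version control via Cargo's configuration file (a sketch; the chosen directory's `bin` subfolder still has to be added to PATH manually):

```toml
# .cargo/config in the project root: keep `cargo install`ed tools
# local to this project instead of ~/.cargo/bin
[install]
root = "target/tools"   # binaries end up in target/tools/bin
```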

My current motivation for raising this issue is that I'm developing bindings for a C library using github.com/crabtw/rust-bindgen. rust-bindgen has a library crate which provides a Rust API, as well as a binary crate which provides a command-line interface. I need to use the binary crate because the library API won't work for my use case. So, rust-bindgen needs to be installed and available as a runnable binary for doing routine development, and it would be nice to formally record that somewhere, and have a simple 'cargo update' or 'cargo install' that would fetch crates with the commands that I need.

For that use case, wouldn't a build script suffice? In theory rust-bindgen would also have a library API and then your build script would take care of that while building.

Rust-bindgen does indeed have a library API, but it is not exactly the same as the CLI. Not to get into too much detail, but the library API is a macro that generates bindings at compile time, and the CLI writes the bindings to a .rs file. I'm wrapping a library with safe Rust bindings, so writing the bindings explicitly to a .rs file, where I'll see in git if they've changed and can review the changes before committing them, is preferable.

I suspect that many crates are like this. The library API and the CLI are slightly different use cases, and so will be designed slightly differently and be good for different things. Or, the crate will be a command-line oriented tool with no library API, but still used in the development of a particular Rust project.

Unless I'm mistaken, this was accidentally closed.

Oh dear I did indeed fat finger that, sorry!

I have a use case that also requires commands available from dev-dependencies.

My project has to run arbitrary user scripts/commands (sort of like codingame). Those scripts are run as a child process by the library, which pipes to and interacts with the stdin, stdout and stderr of the child. I want to test that, and the most sensible approach I could think of was having in-project dev-dependencies with executables to act as user scripts. But then I found out that cargo does not compile dev-dependency commands.

Although somewhat different from the original issue, this also involves having binaries from dev-dependencies compiled. I'm open to alternatives; my strongest motivation for having this done in cargo is continuous integration testing of the child-process piping functionality on multiple platforms.

Is there a strong reasoning to oppose binaries in dependencies in general and in dev-dependencies in particular?

Edit: fiddled a bit and found #1581, which could also be used to compile my test user scripts.

There's not really any particularly strong reason to _not_ do this, it basically just needs a principled design. For example we can't really "just compile" all dev-dependency binaries and put them somewhere. Some concerns I have with that are:

  • What if I don't actually need all those binaries? Will this just clutter up build times?
  • When are those binaries actually compiled? Does cargo build do that? cargo test?
  • How do we resolve binary name conflicts between dependencies?

In general build scripts tend to solve most of these problems naturally, so I personally like to push on those as hard as possible, but I can definitely see that for CLI tools this may fall down from time to time (though cargo install can often fill in).

In my case what I'm missing is the ability to compile binaries for use inside tests, not running commands from dev-dependencies outside tests.

The more I think about it, the more reasonable dev-build scripts seem, since in the general case they could be used to generate anything for tests and benches.

For now I'll resort to a Makefile + PS script (windows) for compiling a binary and making it available to tests via some environment variable. But I'm willing to write my first RFC for #1581.

Out of curiosity, isn't it easier to link to a library and call a function than it is to execute a binary? Shouldn't the dev-dependencies ship libraries that you can call in tests?

I do have a feeling though, yeah, that dev build scripts will likely be a thing at some point.

Well, the point is specifically to test the integration of running a child process. I guess in theory I could fork, set up pipes, and call functions, but I don't think that's in std or cross-platform, nor could it use the interface that would be tested.


Are there any workarounds for this right now? I was getting flaky builds due to the problem of having to use cargo install mdbook on cached build servers. I'm now adding a hacky script as a workaround in brendanzab/pikelet#20. I have to use a hard version number though, otherwise if mdbook updates, I will always be forcing the rebuild. What I'd love is just to be able to add mdbook to my dependencies:

[dev-dependencies]
mdbook="0.1.5"

And then somehow be able to call that specific binary.

Perhaps instead of having separate main dependencies and dev-dependencies, cargo should make it possible to specify a list of environments for each dependency?

Same as with features: by default the crate would only be available at runtime, while specifying "dev"/"bin"/"ci" would mean the dependency should be installed only during development, installed with binaries, or installed in CI pipelines, respectively.
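A manifest under that proposal might look like this (`environments` is invented syntax illustrating the idea, not something Cargo implements):

```toml
[dependencies]
# "environments" is hypothetical: only fetch/build this dependency
# in the listed contexts
mdbook = { version = "0.1.5", environments = ["dev", "ci"] }
```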

Just want to chime in that this is also a problem for Deno. We have a rust http server that we test Deno programs against. It needs to be a standalone binary so we can use it outside of “cargo test” for debugging and whatnot. But the binary is a dependency of the tests as well. There is no way to express this dependency in cargo currently (as far as I can tell).

I am trying to write a database system in Rust and would love to have this feature too. It's a distributed system where I need to run a couple of processes for integration testing.

Currently I have a build script that re-runs cargo install whenever one of the binary projects (node and coordinator) changes. This is obviously ugly and can mess with other projects on the system, but see my code below for the somewhat expected behaviour of that feature.

use std::process::Command;

// Recursively emit `cargo:rerun-if-changed` for a directory tree so
// the build script re-runs whenever any file in it changes.
fn visit_dirs(dir: std::path::PathBuf) {
    println!("cargo:rerun-if-changed={}", dir.display());

    for entry in std::fs::read_dir(dir).unwrap() {
        let path = entry.unwrap().path();

        if path.is_dir() {
            visit_dirs(path);
        } else {
            println!("cargo:rerun-if-changed={}", path.display());
        }
    }
}

fn main() {
    let mdir = std::env::var("CARGO_MANIFEST_DIR").unwrap();
    let path = std::path::Path::new(&mdir).join("..");

    // Reinstall the sibling binary crates whenever this script re-runs.
    // Note: this installs globally and can clobber other projects.
    for pkg in ["--path=coordinator", "--path=node"] {
        Command::new("cargo")
            .args(["install", pkg])
            .current_dir(&path)
            .status()
            .unwrap();
    }

    visit_dirs(path.join("coordinator"));
    visit_dirs(path.join("node"));

    println!("cargo:rerun-if-changed=build.rs");
}