Cargo Status Update - Week of March 17, 2014

Our major goal this week was cleanup: we're planning on moving our current work to the rust-lang repository in the next week or two, so cleanup was the order of the week.

Command Structure

Because Cargo uses the same design as git (many plumbing commands used by a smaller number of high-level porcelain commands), settling on a standard, low-boilerplate way to write commands was a high priority before we wrote too many of them. We wrote the first few commands by hand, then extracted some useful abstractions.

A few weeks ago, I wrote a library named Hammer.rs that allows you to decode command-line flags into a struct.

extern crate hammer;
extern crate serialize;

use hammer::{FlagDecoder,FlagConfig};
use serialize::Decodable;

#[deriving(Decodable)]
struct MyFlags {
    color: bool, // --color
    manifest_path: ~str, // --manifest-path foo (required)
    count: Option<uint> // --count 12 (optional)
}

// configuration of things like short aliases goes here
impl FlagConfig for MyFlags {}

fn main() {
    let mut decoder = FlagDecoder::new::<MyFlags>(std::os::args().tail());
    let flags: MyFlags = Decodable::decode(&mut decoder);

    // decoder.error may contain an error such as:
    //     `--manifest-path is required`
}

The idea is that the bulk of a command-line tool's implementation should work with typed structs, confining failure handling to the point of deserialization, rather than using an API that spreads error handling across the implementation of the command. For what it's worth, Decodable is a very nice way of isolating the errors that naturally come with user-supplied data (it would be great if the interface used Result more explicitly).

Communication between commands is done via JSON, which also lends itself to serialization and deserialization at the command boundary, with the implementation of the command itself working with typechecked objects.
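For illustration, here is a minimal sketch of that kind of boundary, written with today's serde and serde_json crates rather than the serialize crate used above; the struct shapes are made up, not Cargo's actual types.

// Hypothetical sketch: a plumbing command reads typed JSON from stdin
// and writes typed JSON to stdout, so its body only sees typed structs.
use serde::{Deserialize, Serialize};
use std::io::{self, Read};

#[derive(Deserialize)]
struct ManifestInput {
    name: String,
    sources: Vec<String>,
}

#[derive(Serialize)]
struct CompileOutput {
    name: String,
    success: bool,
}

fn main() -> io::Result<()> {
    // The upstream command's JSON arrives on stdin...
    let mut raw = String::new();
    io::stdin().read_to_string(&mut raw)?;
    let input: ManifestInput = serde_json::from_str(&raw).expect("malformed JSON on stdin");

    // ...the command body works only with typechecked objects...
    let output = CompileOutput {
        name: input.name,
        success: !input.sources.is_empty(),
    };

    // ...and the result goes back out as JSON for the next command in the pipe.
    println!("{}", serde_json::to_string(&output).expect("could not serialize output"));
    Ok(())
}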

Today, I finished up an abstraction that allows a command to specify a struct to deserialize flags into, a struct to deserialize stdin into, and the expected return value type (a Result).

The nice thing about this abstraction is that it allows us to isolate all of the code that manages the cross-process communication, as well as printing nice error messages to the user, and let the implementation of a command focus on doing its task.

Here's an example of how that works in a real command: https://github.com/carlhuda/cargo/blob/master/src/bin/cargo-read-manifest.rs#L32-L57

The TL;DR is:

extern crate hammer;
extern crate cargo;

struct Flags {
    // the structure to deserialize flags into
}

struct Input {
    // the structure to deserialize stdin into.
    // in cargo, many commands share structures for this purpose
    // (for example, the cargo-read-manifest command emits a
    // serialized cargo::Manifest and the cargo-rustc command
    // consumes a cargo::Manifest over stdin)
}

struct Output {
    // the structure that this command will return, to be serialized
    // into JSON.
}

fn main() {
    cargo::execute_main::<Flags, Input, Output>(execute);
}

fn execute(flags: Flags, input: Input) -> CargoResult<Output> {
    // do some work with flags and input, including calls to try!
    // to other Cargo library functions.

    let val = try!(possible_io_err()
        .to_cargo_result(~"print this on the console in red", 17));

    // do more work

    Ok(Output{ ... })
}

Again, the execute function itself is working with typed objects, and has an idiomatic way to report errors to the console through Rust Results without having to think about the nitty-gritty details. Note: we have a slightly different plan for commands that want to stream output as opposed to commands that plan to be piped into other commands.

Another nice thing about this structure is that each command can also be used as a library with no additional work. Simply make whatever structs you need, and call the execute function, expecting a CargoResult in response.

Testing

Both Carl and I are big believers in testing early and often, and we spent a big part of this week getting our testing infrastructure in place.

Carl started porting the Hamcrest assertion library to Rust, and we wrote some infrastructure to make it easy to build projects inside of tests, shell out to commands, and write assertions about the results. From this point forward, all new functionality will be tested both through in-file unit tests and end-to-end acceptance tests.
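To give a flavor of those end-to-end tests, here is a sketch written against today's std::process API rather than our actual Hamcrest-based helpers; the command name and the expected error message come from the examples earlier in this post, and the assertions are purely illustrative.

// Illustrative end-to-end test: shell out to a command and assert on the
// result. Assumes cargo-read-manifest is on PATH; not Cargo's real tests.
use std::process::Command;

#[test]
fn read_manifest_requires_manifest_path() {
    let output = Command::new("cargo-read-manifest")
        .output()
        .expect("failed to spawn cargo-read-manifest");

    // With no flags, the command should fail and name the missing flag.
    assert!(!output.status.success());
    let stderr = String::from_utf8_lossy(&output.stderr);
    assert!(stderr.contains("--manifest-path is required"));
}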

Commands So Far

So far, we have three commands, which we've used to flesh out the requirements for commands in general:

  • cargo-read-manifest: a plumbing command that takes a manifest file and spits out normalized information about the project (such as crate sources, output locations, and eventually dependency information).
  • cargo-rustc: a plumbing command that takes the normalized output from cargo-read-manifest and compiles the source crate(s) into the specified output location.
  • cargo-compile: a porcelain command that (at the moment) calls cargo-read-manifest and pipes its output into cargo-rustc (see the sketch after this list). It will eventually be responsible for managing the default workflow, including fetching normalized configuration and ensuring dependencies are up to date.
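To make the plumbing/porcelain split concrete, here is a rough sketch of that pipe written against today's std::process API; the command names and the --manifest-path flag come from earlier in this post, but everything else is illustrative rather than Cargo's actual implementation.

// Hypothetical sketch: a porcelain command spawns cargo-read-manifest and
// feeds its JSON output into cargo-rustc over a pipe.
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    let mut read_manifest = Command::new("cargo-read-manifest")
        .arg("--manifest-path")
        .arg("Cargo.toml")
        .stdout(Stdio::piped())
        .spawn()?;

    // cargo-rustc consumes the serialized manifest over stdin.
    let manifest_json = read_manifest.stdout.take().expect("piped stdout");
    let status = Command::new("cargo-rustc")
        .stdin(Stdio::from(manifest_json))
        .status()?;

    // Reap the plumbing command, then exit with cargo-rustc's status.
    read_manifest.wait()?;
    std::process::exit(status.code().unwrap_or(1));
}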

We expect that cargo-read-manifest will be broadly useful for Cargo itself and external tools that want a way to work with the normalized manifest, once defaults and configuration have been applied. We also plan to add a --locate flag to cargo-read-manifest, which will search up the directory hierarchy for a Cargo.toml. The normalized manifest output always includes the full path to the manifest, so this will also be a nice extension point for other tools.
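To illustrate the planned --locate behavior, here is a sketch in present-day Rust (not Cargo's code): the search simply walks up the directory tree until it finds a Cargo.toml, then prints its full path.

// Illustrative only: walk up from the current directory looking for Cargo.toml.
use std::env;
use std::path::{Path, PathBuf};

fn locate_manifest(start: &Path) -> Option<PathBuf> {
    let mut dir = Some(start);
    while let Some(d) = dir {
        let candidate = d.join("Cargo.toml");
        if candidate.is_file() {
            return Some(candidate);
        }
        dir = d.parent();
    }
    None
}

fn main() {
    let cwd = env::current_dir().expect("no current directory");
    match locate_manifest(&cwd) {
        Some(path) => println!("{}", path.display()),
        None => eprintln!("no Cargo.toml found in {} or any parent", cwd.display()),
    }
}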

The next major area, dependencies and configuration, will produce some more commands. Speaking of which...

Dependencies and Configuration are Next!

Now that we've got the basic infrastructure in place, the plan is to implement dependencies and configuration next. Eventually, we plan to support dependencies from the local file system, git repositories, and a central package repository out of the box. Dependency resolution will take all of these kinds of dependencies into consideration, and future plugins may be able to add additional kinds of packages.

For the immediate present (so we can continue to iterate on working code), we plan to add a dependency feature to the manifest and allow the user to configure a local directory where each dependency is located. This is emphatically not the workflow we envision for Cargo, even in the near future, but I wanted to call it out for people following along so that the immediate implementation strategy doesn't cause confusion.

We expect that the normal usage pattern of Cargo will be to use dependencies provided by the package repository, falling back to git packages for unpublished work or in order to fix a critical bug in published packages. We plan to make swapping from a published package to a git package with the same name seamless, so that the bugfix scenario is well represented.

Thanks

Thanks for the warm welcome we got, and all the enthusiasm around the project. As production users of Rust at Tilde, we have a personal stake in getting this right. We greatly desire feedback, especially once we start shipping software that we feel is ready for fellow Rust developers to use in their day-to-day workflows.

@valpackett

Well, if you use text streams, you get user extensibility for free, no complex plugin architecture needed :-)

@esummers

olivier-renaud: My guess is that it is for use by distributions like MacPorts to insert stuff in the middle of tasks without having to patch configuration files. It probably helps with functional testing. I'm sure it is also trivial since the tasks align with crates.
