@rainersigwald
Last active August 9, 2017 20:47

Negotiating Multitargeted References

With .NET Core SDK 1.0, we introduced multitargeted projects that define multiple TargetFrameworks. These projects produce multiple outputs, each compatible with a different set of frameworks.

A project that references a multitargeted project wants to refer to a specific output, but users shouldn’t have to manually manage which one—just like referring to a NuGet package, they should get the best match for the referencing project's TargetFramework.
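
For concreteness, here is a minimal sketch of such a pair of projects (the project names anticipate the examples used below; the specific TFMs and paths are illustrative):

```xml
<!-- M.csproj: a multitargeted library. The plural TargetFrameworks property
     means this project produces one output per listed framework. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>net45;netstandard1.3</TargetFrameworks>
  </PropertyGroup>
</Project>

<!-- P.csproj: a single-targeted project that references M. The reference does
     not say which of M's outputs to use; the build should pick netstandard1.3
     as the best match for netcoreapp1.0. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\M\M.csproj" />
  </ItemGroup>
</Project>
```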

This must be done both for full builds (that produce output) and for design-time builds (that provide information to the IDE).

This is conceptually easy: for each reference, find the best match from its available TargetFrameworks and refer to that.

Why This Isn't Trivial

Determining Compatibility

There's no universally available `TargetFramework? BestMatch(TargetFramework desired, TargetFramework[] options)` method.

The existing compatibility matrix is embedded in NuGet binaries. That has several downsides:

  • Updating the matrix to account for new compatibility levels or implementations requires taking a binary update of NuGet (which cascades to VS and the CLI).
  • The compatibility check can only be done in an environment that already requires NuGet. The .NET Core SDK does, but existing/legacy projects cannot assume the availability of NuGet; it's an optional feature of VS.

MSBuild Constraints

The MSBuild programming language has some properties that make this kind of negotiation difficult.

Each project is isolated, and the only way to get information from another project is to build a target in the remote project. Building that project requires evaluating it (computing the value of all statically-defined items and properties) and executing the desired target (and all targets depended on by that target).

Evaluating a project takes measurable time, especially now that SDK projects use globbed includes (so evaluating a project must grovel the filesystem to determine what items exist).

Executing targets can range from fast to slow. For the purposes of this discussion, it's fast to run the “what TargetFramework is most compatible with X” target.

MSBuild doesn't have a concept analogous to a function call with arguments. To get that effect, you must execute a target in a project with any desired arguments passed as new global properties. A project must be reevaluated from scratch for every unique set of global properties. That means that evaluation time will triple if you want to “call a function” with 3 different arguments.
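
A minimal illustration of that pattern, with an invented project, target, and property name:

```xml
<!-- Illustrative only: "calling" a target in Other.proj three times with
     different "arguments". Each distinct Properties value is a new set of
     global properties, so Other.proj is evaluated from scratch three times. -->
<Target Name="CallWithThreeArguments">
  <MSBuild Projects="Other.proj" Targets="GetAnswer" Properties="Argument=1" />
  <MSBuild Projects="Other.proj" Targets="GetAnswer" Properties="Argument=2" />
  <MSBuild Projects="Other.proj" Targets="GetAnswer" Properties="Argument=3" />
</Target>
```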

Negotiation Error Requirements

There are open questions about what the experience should be if a user has a ProjectReference to a project with only incompatible TFs.

  • NuGet emits errors for mismatches at restore time. Is that sufficient?
  • Does the error even need to exist?
    • Problems would still be discoverable at runtime, but of course that's harder to diagnose.
    • What about planned scenarios like referencing a net46 project from a netstandard2.0 one?
  • Is the current error good enough?

Current approach

For VS2017 15.0/.NET SDK 1.0 through 15.3/SDK 2.0, this problem is solved by pushing the compatibility computation to the referenced project.

For all of these examples, let's think about a single-targeted project P that refers to a multitargeted project M and a single-targeted project S.

When resolving P's references:

  1. Ask M what the best match is for P's TF. It responds with one of its options. [Evaluates M(referringTF=PTF)]
  2. Ask S what the best match is for P's TF. It responds with its only option. [Evaluates S(referringTF=PTF)]
  3. Annotate the references for M and S in P with the desired target metadata.
  4. Get the output for the appropriate TFs from M and S. [Evaluates M(TF=x) and S()]
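
A simplified sketch of steps 1–3 as they might appear in P's targets; the target, property, and metadata names here are illustrative, not the exact names used by the SDK:

```xml
<Target Name="NegotiateReferenceTargetFrameworks">
  <!-- Steps 1-2: evaluate each reference with the referring TF passed as a
       global property, so the *reference* can run the compatibility check. -->
  <MSBuild Projects="@(ProjectReference)"
           Targets="GetBestTargetFramework"
           Properties="ReferringTargetFramework=$(TargetFramework)">
    <Output TaskParameter="TargetOutputs" ItemName="_NegotiatedTargetFramework" />
  </MSBuild>
  <!-- Step 3 (elided): copy each answer back onto the matching ProjectReference
       item as metadata, so that step 4 can build the reference with the chosen
       TargetFramework as a global property (a second evaluation of M). -->
</Target>
```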

This requires a minimum of two evaluations of each reference: one for the best-match query (with a global property of the requesting TF), and one for the inner build with the right TF (or with global properties explicitly unset to get “the only TF”). If there are other projects with different TFs that reference M, there will be even more evaluations.

That's true even when everything single-targets, because there's no way to know from the referencing project that the referenced project multitargets; it must ask.

That means that evaluation time for all ProjectReferences is doubled (or more), even for projects that have never multi-targeted. This is a performance problem.

Another problem with this approach is that the error messages for incompatible references don't mention both projects involved, only the referenced project. It can be difficult to determine which project has a bad set of TFs.

Solution configurations

Problems like this one have existed for a time for the Configuration and Platform dimensions of a project. The solution taken long ago was to elevate the concepts and the mapping between projects to the solution level.

A solution has Solution Configurations, which specify what Project Configurations are active within it. At reference-resolution time, MSBuild targets consult the Solution Configuration to determine what properties to pass to a given reference. This makes it possible to specify "Even when the referencing project is built for Debug, this reference should always be Release" or "This project is always AnyCPU even when building for x86".

Visual Studio is aware of solution configurations and provides UI to manipulate them.

We decided not to go this way for TargetFramework and RuntimeIdentifier, because

  • It requires having a solution.
  • It requires changing the solution format.
  • It requires teaching Visual Studio about the new dimensions.

Implementation options

The flows below are described using the same example projects as above.

Query for Multitargeting

This is the approach I was thinking of for 15.3.

When resolving P's references:

  1. Ask M and S if they multitarget. [Evaluates M() and S()]
  2. Ask M what the best match is for P's TF. It responds with one of its options. [Evaluates M(referringTF=PTF)]
  3. Annotate the reference for M in P with the desired target metadata.
  4. Get the output for the given TF from M. [Evaluates M(TF=x) but S() was already done]
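
A sketch of step 1 under this approach, with hypothetical target and item names; the point is that no global properties are passed, so single-targeted references are only ever evaluated once:

```xml
<Target Name="QueryReferenceMultitargeting">
  <!-- One no-global-properties evaluation of each reference to ask whether it
       multitargets at all. -->
  <MSBuild Projects="@(ProjectReference)" Targets="ReportIsMultitargeted">
    <Output TaskParameter="TargetOutputs" ItemName="_ReferenceMultitargetingInfo" />
  </MSBuild>
  <!-- Only references that report multitargeting go on to the best-match query
       (step 2) and the inner build (step 4), for three evaluations of M in total;
       S stops here after exactly one evaluation. -->
</Target>
```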

Pros:

  • Pay-for-play cost: single-targeted projects evaluate only once.
  • Scoped change to the current protocol.

Cons:

  • Requires triple evaluation for multi-targeted projects.
  • Lose TF-compatibility check for single-targeted references.

Query for TFs; Choose in Consumer

This is similar to the above, but avoids one evaluation for multi-targeted projects by pushing the best-match choice to the consumer.

When resolving P's references:

  1. Ask M and S what TFs they support. [Evaluates M() and S()]
  2. Determine the best-match TF for M, and confirm that S's only TF is acceptable.
  3. Annotate the reference for M in P with the desired target metadata.
  4. Get the output for the given TF from M and S. [Evaluates M(TF=x) but S() was already done]
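
A sketch of step 1 in this variant (hypothetical names again); the difference from the previous approach is that the answer is the full TF list, and the compatibility decision moves into P:

```xml
<Target Name="GetReferenceTargetFrameworkLists">
  <!-- One no-global-properties evaluation of each reference, returning the TFs
       it can build for. -->
  <MSBuild Projects="@(ProjectReference)" Targets="ReportTargetFrameworks">
    <Output TaskParameter="TargetOutputs" ItemName="_ReferenceTargetFrameworkList" />
  </MSBuild>
  <!-- Each returned item would carry something like "net45;netstandard1.3" as
       metadata; P runs the compatibility check itself (step 2), annotates the
       reference (step 3), and only then builds it for the chosen TF (step 4). -->
</Target>
```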

Pros:

  • Pay-for-play cost: single-targeted projects evaluate only once.
  • Error messages for TF match failures are better when produced from the referencing project (Project P: ERROR : Cannot find a target framework compatible with {0} in project "{1}" which targets {2}.).

Cons:

  • Requires double evaluation for multi-targeted projects.
  • Requires compat matrix in all project types.

Build Everything; Select at Consumption-Time

An eagerly-evaluated version of the above.

When resolving P's references:

  1. Ask M and S for all their outputs. [Evaluates M() and S()]
  2. M invokes its inner builds. [Evaluates M(TF=x), M(TF=y)]
  3. M's outer build passes the result of all of its inner builds back to P.
  4. Determine the best match TF for M and remove non-matches from reference list.
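
A sketch of how M's outer build might fan out to its inner builds and hand the union of their outputs back to P. GetTargetPath is a real common target; the other names are hypothetical:

```xml
<Target Name="GetAllInnerBuildOutputs" Returns="@(_InnerBuildOutput)">
  <ItemGroup>
    <!-- One item per TF in $(TargetFrameworks), each carrying the global
         property that defines the corresponding inner build. -->
    <_InnerBuildTargetFramework Include="$(TargetFrameworks)" />
    <_InnerBuild Include="$(MSBuildProjectFullPath)"
                 AdditionalProperties="TargetFramework=%(_InnerBuildTargetFramework.Identity)" />
  </ItemGroup>
  <!-- One self-invocation per TF: these are the M(TF=x), M(TF=y) evaluations. -->
  <MSBuild Projects="@(_InnerBuild)" Targets="GetTargetPath">
    <Output TaskParameter="TargetOutputs" ItemName="_InnerBuildOutput" />
  </MSBuild>
</Target>
```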

Pros:

  • Same good error messages as the choose-TF-and-call-back approach.
  • Enables some scenarios where people expect calling a target on a project's outer build to return the union of its inner build outputs for the same target.

Cons:

  • Complexity in consumption of references (to filter to only the right TFs).
  • Does more work than necessary.

Precompute TF

The logic for selecting the output of a built project is the same logic that's required to figure out which assets from a NuGet package get referenced. For NuGet, that computation is done at restore time, and the results are cached and read at project build time.

The NuGet restore process could be extended to cache information about TFs for ProjectReferences, and annotate the references with the cached information at build time.
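
A hypothetical sketch of the build-time side: a restore-generated .targets file (imported after the project's own items) that records the negotiated TF for each ProjectReference, so no outer-build evaluation is ever needed. The metadata name and mechanism are invented for illustration:

```xml
<Project>
  <ItemGroup>
    <ProjectReference Update="..\M\M.csproj">
      <NegotiatedTargetFramework>netstandard1.3</NegotiatedTargetFramework>
    </ProjectReference>
  </ItemGroup>
</Project>
```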

Pros:

  • Optimal evaluation; never touches the outer build of references.
  • Flow of information easier to understand.
  • Compat calculations are performed only once.

Cons:

  • Increased complexity of NuGet assets file.
  • Requires asset-file understanding in all projects.
@nguerrera

> Problems like this one have existed for a time for the Configuration and Platform dimensions of a project.

The TF dimension is actually significantly different from the others. Consider the set of projects in a solution as nodes in a graph with P2P references being the edges. For any given (Sln Configuration, Sln Platform), each node can be assigned a (Project Configuration, Project Platform) in the solution configuration manager. However, in the TF case, it is the edges that need labeling, not the nodes.

For example, the following is an entirely realistic graph.

Nodes:

  • A (netcoreapp1.0)
  • B (net46)
  • C (net45, netstandard1.3)

Edges:

  • A -> C (netstandard1.3)
  • B -> C (net45)

A target framework drop-down in the row for C in solution configuration manager would not be able to express that.
