Creating an Azure library for .NET: Where does it hurt?

Client TypeSpec (client.tsp) errors

The TypeSpec tooling focuses on the core TypeSpec language, ensuring that the specification is syntactically valid and well-formed. However, although client.tsp is a critical component of Azure library generation, it does not receive the same level of validation and analysis.
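
For context, client.tsp is itself a TypeSpec file that layers client-only customizations over the service spec. A minimal sketch of its typical shape (the import paths and the Contoso.Widgets namespace here are illustrative, not from a real spec):

```typespec
// client.tsp - client-only customizations layered over the service spec.
// The import paths and the Contoso.Widgets namespace are illustrative.
import "./main.tsp";
import "@azure-tools/typespec-client-generator-core";

using Azure.ClientGenerator.Core;

// Customizations such as renames or visibility overrides live here. A typo in
// a decorator or a reference to a member that no longer exists only surfaces
// when the full compile runs, not while the file is being authored.
@@access(Contoso.Widgets.getOperationStatus, Access.internal);
```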

Challenges

  • TypeSpec authors do not have advance warning or insight when the client.tsp is malformed or contains an error that makes it invalid.

  • TypeSpec compilation includes the client.tsp file; when that file has errors, compilation fails before the emitters/generators take over, and the resulting error messages are often unclear.

Proposed mitigations

Extend the TypeSpec linter experience

Add awareness of client.tsp and its syntax to the TypeSpec linters. This ensures that the end-to-end TypeSpec authoring experience is consistent, with authors having the same guidance for producing syntactically valid and well-formed specs.

.NET Generator Agent applies automatic fixes

As part of the code generation process, the .NET generator agent runs the TypeSpec compiler, observes any client.tsp failures, and attempts to automatically apply fixes for them, if possible.

.NET rule violations

The names in the authored TypeSpec are correct in that they follow the TypeSpec goal of being an accurate abstract representation of the service. When running code generation, however, the .NET stage fails because a service name conflicts with .NET guidance. For example, the service defines Response, but the .NET validation fails with a suggestion to rename it to Result.
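
As a hedged illustration (the Contoso.Widgets namespace and model are invented for this example), the authored spec may legitimately look like the following; the name is accurate for the service, and only the generated .NET type name is flagged:

```typespec
// main.tsp (excerpt) - the service genuinely calls this payload "Response",
// so the spec name is an accurate abstraction; the .NET naming rules flag it
// and suggest "Result" for the generated client type.
namespace Contoso.Widgets;

model Response {
  operationId: string;
  status: string;
}
```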

Challenges

  • Changing the name in the TypeSpec is not helpful, because my service API intentionally calls this Response and other languages such as Python prefer Response.

  • It is not clear that it is possible to apply a client-only rename in client.tsp (a rename of this kind is sketched after this list), because authoring is focused on pure TypeSpec validation and linters advise only on TypeSpec syntax and structure. Documentation for client concepts is kept separate on an Azure-focused site, and the concepts do not appear in the official TypeSpec documentation.

  • Naming suggestions are generalized and not universally applicable. The architects prefer that these names are not automatically transformed during code generation so that they can be customized via client.tsp or metadata techniques in library code.
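
A client-only rename of this kind does appear to be expressible today through the client generator core decorators; a minimal sketch, assuming the hypothetical Contoso.Widgets.Response model above and the @clientName decorator from @azure-tools/typespec-client-generator-core:

```typespec
// client.tsp - rename the model only for the .NET client. The wire shape and
// the names seen by other languages (for example, Python) are unchanged.
import "./main.tsp";
import "@azure-tools/typespec-client-generator-core";

using Azure.ClientGenerator.Core;

@@clientName(Contoso.Widgets.Response, "Result", "csharp");
```

The gap is less the mechanism than its discoverability, which is the point of the challenges above.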

Proposed mitigations

Extend the TypeSpec linter experience

Add awareness of language-specific rules - such as naming - so that they are evaluated as the spec is written. This makes authors aware of issues as part of the authoring workflow and allows them to accept "light bulb" suggestions that automatically apply the suggested fix to client.tsp.

When linter validation is clean, code generation is expected to produce a client library without rule violations and an API View upload that enables next steps.

.NET Generator Agent applies automatic fixes

As part of the code generation process, the .NET generator agent will observe rule failures and attempt to automatically apply fixes for them, if possible. The adjustments made will be based on the rule suggestions and may not be the final version that meets architect approval, but they will be sufficient to complete generation and unblock API View creation.

.NET API breaking changes

Changes to the REST API spec may cause breaking changes to the client API even when they are not breaking for the REST API surface. In some cases, a spec change may not alter the REST API shape at all, yet still introduce breaking changes into the .NET client API.

Challenges

  • Understanding the impact of REST API changes on the client surface is not always straightforward, as it requires specific knowledge of the implicit assumptions that each language emitter/generator makes when interpreting the TypeSpec.

  • TypeSpec allows different ways to express the same concept - for example, union syntax versus the enum keyword (see the sketch after this list). While these are often interchangeable for the service representation, they produce different forms in generated code for some languages.

  • Simple changes, such as fixing a name typo, may require client overrides in client.tsp to avoid breaking changes.

  • Some spec changes require custom code to preserve the existing API shape, which cannot be expressed in TypeSpec or in the client overrides in client.tsp. This leads to a chicken-or-egg problem: code generation must be run and fail before remediation can begin.
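
As a minimal, invented example of the second point above, both spellings below describe the same set of wire values, yet emitters may project them into different client shapes (for example, a fixed enum versus an extensible enum in .NET), so switching from one spelling to the other can break the generated client API:

```typespec
// A fixed enum of the allowed values.
enum WidgetColor {
  Red: "red",
  Blue: "blue",
}

// A union over the same values, with string as an open-ended escape hatch.
// Some emitters project these two forms differently, so changing one spelling
// to the other can be a breaking change for the generated client.
union WidgetColorValue {
  string,
  Red: "red",
  Blue: "blue",
}
```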

Proposed mitigations

Extend the TypeSpec linter experience

  • Add awareness of TypeSpec constructs that are likely to cause brittle client API shapes, so they can be flagged with guidance and "light bulb" suggestions as the spec is authored.

  • Add awareness of the most recent API shape, if one exists, and analyze the impact of spec changes as they are written. Changes which would result in breaking changes are flagged with guidance and "light bulb" suggestions.

  • If a breaking change in the spec cannot be directly remediated in the TypeSpec, the linter guidance should clarify that a fix will be attempted during code generation.

.NET Generator Agent applies automatic fixes

As part of the code generation process, the .NET generator agent will observe breaking change failures and attempt to automatically apply fixes for them, if possible. The adjustments made will be based on standard .NET patterns for preserving back-compat and avoiding binary breaking changes.

If the agent is unable to resolve the issues, generation will proceed with the breaking changes in place. This blocks builds and API View creation, but allows for human intervention to resolve the breaking changes.

.NET rule violation messages lack context

The analyzers that power the .NET rules have never had focused investment and, as a result, are of uneven quality. Some messages are well-written and expressive, explaining what went wrong. Others are not well written and do not adequately communicate the failure.

Challenges

  • The analyzers were originally intended to aid Azure library developers in their IDE and assume that the IDE has highlighted and drawn focus to the failure context. This makes for a poor experience on the command line and in pipelines.

  • Messages were written over a period of time, generally as someone's "as time allows" effort to improve the quality of our libraries. As a result, the messages often assume context and lack a consistent voice and format.

  • Even the well-written messages explain what failed and suggest renames/adjustments, but they do not provide information about how to make those changes. They assume a hand-written experience in an IDE and do not cover TypeSpec and client.tsp scenarios.

Proposed mitigations

  • Invest in an overhaul of the .NET analyzers which refines the messages to ensure that they use a consistent voice and provide the full context needed for all situations.

  • Evaluate the failure messages and experience in the IDE, local command line, and pipelines to ensure a consistent experience that is clear and useful in all environments.

  • Add governance which ensures that changes over time continue to adhere to the quality bar and follow the patterns established as part of the overhaul.

Generated samples, if any, are poor

When developers look over samples, they are doing so with the perspective of trying to accomplish a specific task. For example, "I want to upload a blob to Storage." To accomplish this, they'll need to apply several operations:

  1. Create the needed credential for authorization
  2. Create the client
  3. Create a container for the blob, if one does not exist
  4. Upload content to a blob in that container

The generator, however, has only operation-level awareness when emitting code. This means that the samples it creates are rudimentary and focused on a single operation rather than an end-to-end developer scenario.

Challenges

  • Developers are frustrated and often need to open issues to ask follow-up questions about samples because the samples lack important context for accomplishing a task.

  • Developers fall back to external sources or MS Learn articles, which may or may not be current or focused on the right generation of the library.

Proposed mitigations

We need new tooling that can consider the champion scenarios for a library and decompose those into samples at the appropriate level to demonstrate the full end-to-end flow. While the champion scenarios are expected to be the same across languages, the resulting sample content will need to follow the patterns, format, and standards for each language.

Generated API documentation focuses on REST semantics

As part of the generation process, documentation from the REST API spec is read by the generator and transformed into the correct format to serve as API documentation for the client types and operations. The transformation is largely structural, concentrating on the format and lacking awareness of the concepts and context of the documentation.

Challenges

  • The REST API spec documentation discusses concepts and operations from the perspective of a REST service. It assumes that context, including the visibility of HTTP concepts, and uses terminology appropriate to the service. While the client provides the same operations, the context and semantics are not the same.

  • The generator has an awareness of the direct members associated with documentation, allowing it to apply renames and other transforms related to the structure of client code. However, the content of REST API docs often makes inline references to other service members or concepts, which are not in the current generator scope. As a result, documentation that reads well and makes sense for the service often feels disjointed and disconnected for developers using the client.

Proposed mitigations

We need new tooling that can evaluate the REST API documentation, the REST API structure, and the structure of the generated client types then meaningfully rewrite it for the client context. This would include not only the structural transforms needed, but a reframing of concepts in the client context and a focus on the perspective of a developer using the client library rather than a caller of the service.
