Releasing .NET Core is a complex process involving many components, dependencies on external systems, and multiple teams. With the growing usage and success of .NET Core, it is critical that we are able to release with confidence, which requires taking the necessary steps to validate that each release was completed as expected. Ideally we would always release the product without issues, but that has not been the case so far. We need tests in place to identify issues so that they can be addressed as part of the release tick-tock before customers are impacted.
This document describes, at a broad level, how to validate .NET Core releases and what should be validated. It is not intended to describe the minute details.
The document breaks down the items to validate into the following broad groups.
- Acquisition Artifacts - The primary ways customers acquire the product, e.g. Native Installers
- Auxiliary Artifacts - Auxiliary artifacts used with the product, e.g. NuGet Packages
- Discovery Mechanisms - The mechanisms by which customers discover that new releases are available, e.g. the .NET Web Site
When thinking about how to validate each part of the release, it is important to think about two aspects:
- Availability - Are the artifacts available?
- Integrity - Even though an artifact is available, how do we ensure it is the correct artifact for the release?
Native Installers (Windows/macOS/Linux)
Validation:
- Availability via DLC/AzLinux
- Simple version number checking feels appropriate (see the sketch after this list). Installer unit tests should be validating installer correctness as well as covering upgrade scenarios.
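The check above lends itself to a small script that runs on the test machine after the installer completes. A minimal sketch, assuming the dotnet host is on PATH; the version number is a placeholder, not a real release:

```python
# Minimal sketch: after the installer runs, confirm the expected SDK is
# visible to the dotnet host. The version below is a placeholder.
import subprocess

EXPECTED_SDK = "3.1.201"  # hypothetical version under validation

def installed_sdks():
    out = subprocess.run(["dotnet", "--list-sdks"],
                         capture_output=True, text=True, check=True).stdout
    # Each line looks like: "3.1.201 [C:\Program Files\dotnet\sdk]"
    return [line.split()[0] for line in out.splitlines() if line.strip()]

assert EXPECTED_SDK in installed_sdks(), f"SDK {EXPECTED_SDK} is not installed"
```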
Docker Images
Validation:
- Expected images/tags exist for release
- Images contain correct versions (a sketch of both checks follows this list)
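Both checks can be automated by pulling each expected tag and asking the image what it carries. A sketch; the image/tag/version map is illustrative, not a real release manifest:

```python
# Sketch: verify each expected tag exists and reports the right version.
# The image names, tags, and versions below are illustrative only.
import subprocess

EXPECTED = {
    "mcr.microsoft.com/dotnet/core/sdk:3.1": "3.1.201",    # hypothetical
    "mcr.microsoft.com/dotnet/core/runtime:3.1": "3.1.3",  # hypothetical
}

for image, version in EXPECTED.items():
    subprocess.run(["docker", "pull", image], check=True)  # fails if the tag is missing
    info = subprocess.run(["docker", "run", "--rm", image, "dotnet", "--info"],
                          capture_output=True, text=True, check=True).stdout
    assert version in info, f"{image} does not report version {version}"
```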
Zips/Tarballs
Validation:
- Expected content exists on DLC and blob storage
- Validate published checksums against artifacts (see the sketch after this list)
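Checksum validation is mechanical and worth automating end to end. A sketch that streams an archive and compares its SHA-512 against the published value; both URLs are placeholders for the real DLC/blob storage locations:

```python
# Sketch: compare a downloaded archive against its published SHA-512.
# Both URLs are placeholders; substitute the release's real locations.
import hashlib
import urllib.request

ARCHIVE_URL = "https://example.com/dotnet-sdk-3.1.201-linux-x64.tar.gz"  # placeholder
CHECKSUM_URL = ARCHIVE_URL + ".sha512"                                   # placeholder

published = urllib.request.urlopen(CHECKSUM_URL).read().decode().split()[0].lower()

digest = hashlib.sha512()
with urllib.request.urlopen(ARCHIVE_URL) as response:
    for chunk in iter(lambda: response.read(1 << 20), b""):
        digest.update(chunk)

assert digest.hexdigest() == published, f"checksum mismatch for {ARCHIVE_URL}"
```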
.NET Install Script
Validation:
- Latest-per-channel and version-specific installations work (see the sketch after this list)
- Ensure dependent artifacts exist e.g. latest.version files
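A sketch of both checks, assuming a bash environment; the latest.version feed URL is a placeholder, while the script switches shown are the documented ones:

```python
# Sketch: confirm the channel's latest.version file resolves, then do a
# latest-per-channel install with the published script.
import subprocess
import urllib.request

CHANNEL = "3.1"  # hypothetical channel under validation
LATEST_VERSION_URL = f"https://example.net/dotnet/Sdk/{CHANNEL}/latest.version"  # placeholder

# The latest.version file must exist for "latest" installs to resolve.
latest = urllib.request.urlopen(LATEST_VERSION_URL).read().decode().strip()
print(f"channel {CHANNEL} resolves to {latest}")

subprocess.run(["bash", "dotnet-install.sh",
                "--channel", CHANNEL, "--install-dir", "/tmp/dotnet-test"],
               check=True)
```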
Snaps
Validation:
- Expected snaps are available via the Snap Store
- Validate snaps contain the correct versions of the product
Note: The best investment here would be to automate the Snap building and publishing process (a verification sketch follows this note).
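A minimal verification sketch using snapd's CLI on a Linux test machine; the snap name and version are illustrative:

```python
# Sketch: confirm the snap is published and the store listing carries the
# expected version. The snap name and version are illustrative.
import subprocess

EXPECTED_VERSION = "3.1.201"  # hypothetical

out = subprocess.run(["snap", "info", "dotnet-sdk"],
                     capture_output=True, text=True, check=True).stdout
assert EXPECTED_VERSION in out, "expected version not visible in the store listing"
```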
Source Build
Validation:
- Ensure SDK and Runtime tags and tarballs exist (tag existence is sketched after this list)
- Ensure distros can build
- Ensure tarballs have the right tags for all repos
- Make sure all commits in the tarball are publicly available
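Tag existence can be probed without cloning. A sketch using git ls-remote; the repo list and tag are illustrative, not the authoritative source-build set:

```python
# Sketch: confirm the release tag exists in each repo that feeds source-build.
# The repo list and tag name are illustrative, not the authoritative set.
import subprocess

TAG = "v3.1.3"  # hypothetical release tag
REPOS = [
    "https://github.com/dotnet/coreclr",
    "https://github.com/dotnet/corefx",
    "https://github.com/dotnet/core-sdk",
]

for repo in REPOS:
    out = subprocess.run(["git", "ls-remote", "--tags", repo, TAG],
                         capture_output=True, text=True, check=True).stdout
    assert out.strip(), f"{repo} is missing tag {TAG}"
```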
Antares
Validation:
TODO: Follow up with leecow on this. Is there value in validating the special Antares blob storage publishing? It feels like there is limited value, and the Antares team owns verification of this.
NuGet Packages
Validation:
- Package availability on NuGet.org
Note: There is automation in place for this today. The problem in this space is that we have overlooked/missed packages on occasion. This is a classic case of garbage in, garbage out: we need a better process for identifying the correct list of packages to publish. A potential mitigation would be to require explicit sign-off from the repo owners on which packages to publish. We may also consider creating baselines of the packages we publish; comparing the current list of packages against the baseline will highlight differences that should be acknowledged. An availability sketch follows.
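Availability itself is easy to check against NuGet.org's V3 flat-container endpoint; the hard part, as noted above, is producing the package list, which should come from the signed-off baseline. A sketch with a one-entry placeholder baseline:

```python
# Sketch: check each baselined package/version is resolvable on NuGet.org.
# The baseline is a one-entry placeholder; package ids must be lowercase.
import json
import urllib.request

BASELINE = {"microsoft.netcore.app": "3.1.3"}  # hypothetical baseline entry

for package, version in BASELINE.items():
    url = f"https://api.nuget.org/v3-flatcontainer/{package}/index.json"
    versions = json.load(urllib.request.urlopen(url))["versions"]
    assert version in versions, f"{package} {version} is not on NuGet.org"
```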
Symbols
Validation:
- Symbol availability on msdl as well as //SymWeb
- Cross-check with the published NuGet packages to get the symbols list and validate the symbols are available (a probe sketch follows this list). Need to inspect non-released NuGets to get the full list.
- CoreCLR has some tricky integrity requirements that are covered by manual tests/hard-coded knowledge
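Probing the server is a plain HTTP request against the {name}/{key}/{name} layout; deriving each key from the shipped binaries (and CoreCLR's special cases) is the real work and out of scope here. The PDB name and key are placeholders:

```python
# Sketch: probe the public symbol server for one PDB. The name and key are
# placeholders; computing real keys from the shipped binaries is out of scope.
import urllib.error
import urllib.request

PDB_NAME = "example.pdb"                   # placeholder
KEY = "0123456789abcdef0123456789abcdef1"  # placeholder signature

url = f"https://msdl.microsoft.com/download/symbols/{PDB_NAME}/{KEY}/{PDB_NAME}"
request = urllib.request.Request(url, method="HEAD")
try:
    status = urllib.request.urlopen(request).getcode()
except urllib.error.HTTPError as err:
    status = err.code
assert status == 200, f"symbol not available ({status}): {url}"
```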
Source Tagging
Validation:
- GitHub repos have been tagged appropriately
- Tags reference expected shas (see the sketch after this list)
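This pairs with the source-build tag check above: here the tag is also peeled to compare the commit it points at against the sha that was released. Repo, tag, and sha values are placeholders:

```python
# Sketch: confirm a repo's tag resolves to the commit we actually released.
# The repo, tag, and sha values are placeholders.
import subprocess

EXPECTED = {
    "https://github.com/dotnet/core-sdk": ("v3.1.201", "0123456789abcdef"),
}

for repo, (tag, sha_prefix) in EXPECTED.items():
    out = subprocess.run(["git", "ls-remote", repo, tag, f"{tag}^{{}}"],
                         capture_output=True, text=True, check=True).stdout
    # For annotated tags the peeled ("^{}") line, listed last, is the commit.
    lines = out.strip().splitlines()
    actual = lines[-1].split()[0] if lines else ""
    assert actual.startswith(sha_prefix), f"{repo} {tag} -> {actual or 'missing'}"
```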
.NET Web Site
Validation:
- Release tables contain the release
- Current version links refer to the release
- Tutorials are correct and functional
dotnet/announcements
Validation:
- Appropriate announcement exists
- Discussion issue exists
- dotnet/announcement issue is locked
- Labeled appropriately (an API-based sketch of these checks follows this list)
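Existence, lock state, and labels are all visible through GitHub's REST API. A sketch using unauthenticated search; the query is illustrative and real runs should authenticate to avoid rate limits:

```python
# Sketch: confirm the announcement issue exists, is locked, and is labeled.
# The search query is illustrative.
import json
import urllib.parse
import urllib.request

QUERY = "repo:dotnet/announcements in:title 3.1.3"  # hypothetical release number
url = "https://api.github.com/search/issues?q=" + urllib.parse.quote(QUERY)
result = json.load(urllib.request.urlopen(url))

assert result["total_count"] >= 1, "announcement issue not found"
issue = result["items"][0]
assert issue["locked"], "announcement issue is not locked"
print("labels:", [label["name"] for label in issue["labels"]])
```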
.NET Blog
Validation:
- Blog exists with appropriate content
Release Notes
Validation:
- Ensure the appropriate release notes have been created/updated (a sketch for the JSON files follows this list):
  - READMEs
  - releases-index.json/releases.json
  - Version-specific notes
  - Known-issues
  - Supported OS
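The JSON artifacts are the easiest to verify mechanically. A sketch against releases-index.json and the channel's releases.json; the field names reflect the published schema, while the channel and version are placeholders:

```python
# Sketch: confirm the release appears in releases-index.json and in the
# channel's releases.json. The channel and version are placeholders.
import json
import urllib.request

CHANNEL, RELEASE = "3.1", "3.1.3"  # hypothetical
INDEX_URL = ("https://raw.githubusercontent.com/dotnet/core/"
             "master/release-notes/releases-index.json")

index = json.load(urllib.request.urlopen(INDEX_URL))["releases-index"]
entry = next(e for e in index if e["channel-version"] == CHANNEL)
assert entry["latest-release"] == RELEASE, "releases-index.json not updated"

releases = json.load(urllib.request.urlopen(entry["releases.json"]))["releases"]
assert any(r["release-version"] == RELEASE for r in releases), "releases.json not updated"
```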
Docker Hub Descriptions
Validation:
- Appropriate tags are listed in the Full Tag Listing within the descriptions
Versions List
Validation:
- Appropriate versions are listed in the versions list
How should these release tests get run?
- Wherever it is feasible, automation should be created.
- Humans are a good choice in certain areas such as the discovery mechanisms. The human element can add a lot of value to help ensure the best UX for our customers.
How do we improve the testing?
As with any type of testing, any time there is an issue with a release that is not surfaced via the release tests, we should ask ourselves "how was this missed?" We should adjust this test plan to account for these issues and ensure we can detect them going forward.