
@dhellmann
Created October 14, 2025 11:56

Add documentation for project scripts

Exported on 10/14/2025 at 07:56:45 EDT from Cursor (1.7.40)


User

Add documentation to the end of the README.md for all of the project.scripts defined in pyproject.toml. Explain the purpose of each tool and summarize the inputs and outputs. Do not show exhaustive documentation for the command line options. Mention that all of the tools take the --help option to show the command syntax. List the tools in alphabetical order.


Cursor

I'll help you add documentation for all the project scripts defined in pyproject.toml to the end of the README.md. Let me first examine both files to understand the current structure and the scripts that need to be documented.

Now I need to examine the CLI modules to understand what each script does. Let me search for the CLI modules to understand their functionality.

Now let me examine each CLI module more closely to understand their specific functionality:

Now I have a good understanding of all the CLI tools. Let me create the documentation section and add it to the README.md file.

# Building a Customized Wheel with Fromager

As part of the RHEL AI project, we are building wheels of many of the
InstructLab dependencies. RHOAI can also benefit from this work,
especially for vllm and its dependencies. This document collects some
notes on using fromager in our build pipelines. Any instructions for
fromager itself should be added to the [upstream
repo](https://github.com/python-wheel-build/fromager).

## Basics

Fromager takes as input one or more Python dependency specifications, a
set of override plugins for replacing default behaviors for specific
packages, environment variable settings for builds, and a “variant”
for selecting between different sets of those inputs for different
situations. It assumes the build environment already includes all of
the external dependencies, like compilers, libraries, the right
version of Python, etc. We use containers for those build
environments, with one container per variant. The outputs are a set of
sdists or other source archives in an sdist-repo and the built wheels
in a wheels-repo.

Refer to [the upstream
docs](https://github.com/python-wheel-build/fromager/blob/main/docs/customization.md)
for more details about customizing fromager.

## Where things live

This [builder repo](https://gitlab.com/redhat/rhel-ai/wheels/builder)
includes all of the settings files, overrides, and container
definitions for our builds. If you are responsible for setting up a
build for a specific package, you will be editing files in this
repository.

The `overrides` and `package_plugins` directories include the customizations
given to fromager for building specific packages. Refer to [the upstream
docs](https://github.com/python-wheel-build/fromager/blob/main/docs/customization.md)
for more details about customizing fromager.

The `bin` directory includes tools for use by developers changing the contents
of the repo or updating the versions of wheels that we build.

The `pipelines` directory includes scripts and data files used in the
CI pipelines.

The `test` directory includes scripts and data files used in the CI jobs.

The [wheels repo](https://gitlab.com/redhat/rhel-ai/wheels/wheels) defines the
image for onboarding the source distributions into the build environment.

### Other repositories

The [requirements-pipeline
repo](https://gitlab.com/redhat/rhel-ai/wheels/requirements-pipeline)
includes the files for controlling the versions of the packages built
for RHEL AI.

The [RHAIIS
pipeline](https://gitlab.com/redhat/rhel-ai/rhaiis/pipeline)
repository has the collections and build pipelines for RHAIIS.

## Container files and builder images

The container files `Containerfile.*` in the root directory are assembled
from files in `containerfiles/` directory. The container files have a common
header, common footer, and an AI accelerator-specific block in the middle.
Files are re-generated with `make regen`.

Dynamic properties such as RHELAI version, CUDA version, or ROCm GPU archs
are defined in the Podman build argument files under the `build-args` folder.
The files contain simple `KEY=value` pairs. Values are taken verbatim, so don't
use quotes.
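For illustration, a build-argument file is just a list of such pairs, one per line. The key names below are hypothetical; check the actual files under `build-args/` for the real names and values:

```
# hypothetical build-args file; key names are illustrative only
CUDA_VERSION=12.4
RHELAI_VERSION=1.5
```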

## CI Infrastructure Images

### Podman-stable image

The CI pipeline uses a `podman-stable` image for running containerized jobs
that need to execute podman commands. This image is sourced from the upstream
Podman project on Quay.io and mirrored to our GitLab registry for reliability
and security.

The image is mirrored using skopeo to copy the multi-architecture image from
the upstream repository:

```bash
skopeo copy --multi-arch all \
    docker://quay.io/podman/stable:v5.5.0 \
    docker://registry.gitlab.com/redhat/rhel-ai/wheels/builder/podman-stable:v5.5.0
```

This image is used in:

  • pipeline-api/inputs-validator.yml - for running the wheel collection linter
  • pipeline-api/ci-wheelhouse.yml - for running the wheel building API jobs
  • .gitlab-ci.yml - for various CI jobs that need containerization capabilities

The image version is automatically updated by Renovate when new versions are available upstream.

### Fedora image

The CI pipeline uses `fedora:42` in several pipelines. The image is mirrored to the builder's container registry, too.

```bash
skopeo copy --multi-arch all \
    docker://quay.io/fedora/fedora:42 \
    docker://registry.gitlab.com/redhat/rhel-ai/wheels/builder/fedora:42
```

## Example customizations

We can look at the CUDA variant settings for examples of each type of customization.

`Containerfile.cuda-ubi9` is the container definition file for the CUDA build environment. It installs the libraries and python version needed and sets up a virtualenv with fromager installed.

`overrides/settings.yaml` is the settings file for fromager. It defines a small list of packages that are "pre-built", and taken from PyPI.org or our partners for some of the variants.

`overrides/patches` is the directory with the set of patches to be applied to various packages before building. We follow standard practices of upstreaming any changes we can, of course, but sometimes we have to override source just to get something to build.

### Annotations

We use Fromager annotations to add internal metadata to packages. All internal annotations are prefixed with the `aipcc` namespace. Team-specific annotations have a team prefix, e.g. `aipcc.accelerator-enablement`.

  • `aipcc.component`: Jira Component of the team that owns and maintains the package, one of Accelerator Enablement, Development Platform, PyTorch.
  • `aipcc.accelerator-enablement.libtorch`: package is built with torch as a build requirement and depends on the libtorch ABI. It must be rebuilt whenever Torch is updated to a new version.
  • `aipcc.accelerator-enablement.cuda`: package depends on NVIDIA CUDA libraries. It must be rebuilt whenever CUDA is updated to a new version.
  • `aipcc.accelerator-enablement.rocm`: package depends on AMD ROCm libraries. It must be rebuilt whenever ROCm is updated to a new version.
  • `aipcc.accelerator-enablement.spyre`: package depends on IBM Spyre AIU libraries.
  • `aipcc.accelerator-enablement.tpu`: package depends on Google TPU libraries.

NOTE: The presence of a variant configuration does not necessarily mean that a package has a dependency on an accelerator stack. For example, ray has `variants.cuda-ubi9.pre_built: true` but has no CUDA dependencies.

### Patch Policy

Our approach to maintaining patches in the builder repository follows these principles:

  • No feature or bug fix patches: We do not carry patches for features or bug fixes in this repository. When such patches are needed, the team using the component should maintain a fork with the patch, and we can direct the build to use that fork instead of the upstream source distribution.

  • Minimal build environment patches only: We will carry only minimal changes necessary to allow a package to build in our environment. Examples include:

    • Modifying dependency expressions to remove upstream local-version tags
    • Changing build scripts to enable hermetic builds
    • Updating references to point to our mirrors
  • Upstream collaboration: We prefer working with upstream maintainers to support build settings that override their defaults, though this is not always feasible.

  • Upstream contribution tracking: Changes that can be contributed upstream are tracked in the AIPCC-1994 EPIC.

### Generating patches

When creating patches for the overrides/patches directory, follow these guidelines:

  • Patches must be in standard patch format. To create a patch, check out the repository at the appropriate branch/tag/commit, make the necessary changes, commit them, and then run `git format-patch -1 HEAD`, as sketched below.
  • The commit message must be descriptive and explain the rationale for the change. If the patch can be upstreamed, create a task in AIPCC-1994 and reference it in the commit message.
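A minimal sketch of that workflow, assuming a hypothetical upstream project and tag (the URL, tag, and commit message below are illustrative):

```bash
# clone the upstream project and check out the version being built (names are illustrative)
git clone https://example.com/some-project.git
cd some-project
git checkout v1.2.3

# make the minimal build fix, then commit it with a descriptive message that explains
# the rationale and references the AIPCC-1994 task if the change can be upstreamed
git commit -a -s -m "Remove local-version tag from dependency pin

Needed so the package builds against our mirrored wheels.
Tracked for upstreaming in AIPCC-1994."

# generate a standard-format patch from the last commit and copy it into overrides/patches/
git format-patch -1 HEAD
```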

`overrides/envs` is the directory with the set of environment variables to be passed when a wheel is built. By default, wheels are built by running `pip wheel`, and these environment variables are typically read by the build backend or `setup.py` script for the package. There are some common patterns, but no real standard for the names or values of the variables, so you have to learn how to build the target package from its documentation before creating one of these files.
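As a hedged illustration, the entries in one of these files are just ordinary environment variables for the build. The variable names and values below are hypothetical, and the exact file layout should be checked against the fromager customization docs; conceptually the effect is equivalent to:

```bash
# illustrative only: variables a hypothetical package's build backend might read
export CMAKE_ARGS="-DENABLE_SOME_FEATURE=on"
export MAX_JOBS=4
```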

The package_plugins directory contains fromager plugins that are used to provide custom build and packaging logic for specific Python packages. The directory essentially has a collection of plugins for the fromager tool, which is used to build Python package wheels from source. Each module in the package_plugins directory handles special build requirements for a specific package. This system allows custom build logic for packages that can't be built with the defaults in fromager.

For more extreme cases, it is possible to override core functions like where to acquire the source code for a project or what command to run to build its wheel. These functions are implemented as a plugin API, with modules in package_plugins and also referenced in the pyproject.toml file:

```toml
[project.entry-points."fromager.project_overrides"]
flit_core = "package_plugins.flit_core"
pyarrow = "package_plugins.pyarrow"
torch = "package_plugins.torch"
```

The fromager documentation describes the functions you can override for the build. Override functions are expected to honor the `ctx.variant` setting, if needed.

## Steps to create a .netrc file to access GitLab resources

Container builds have to access resources in our private GitLab project. To download the resources, a local .netrc file with a personal access token is needed. The access token must have permission to read from package registries.

  1. Create a personal access token with permission "read_api" at https://gitlab.com/-/user_settings/personal_access_tokens

  2. Store the token in a file .netrc in the project root directory:

    machine gitlab.com login oauth2 password <token>
    

Note: Many users incorrectly write this file as ~/.netrc. The correct location for this file is ./.netrc relative to this README.md.

  3. Log into the container registry with

    podman login -u token -p <token> registry.gitlab.com

## Steps to onboard a new wheel with build customizations

  • Can the package be built as a wheel at all?
    • If not, consider a container-first build or an RPM build. Stop here.
  • Does the package have any accelerator-specific build settings?
    • If so, we want to take advantage of whichever of them we can.
      • Add the package to the relevant accelerated test collection
      • Go to "Get it to build"
  • Add the package to the non-accelerated test collection
  • Get it to build
    • Follow the steps in the next section
    • We want to avoid constraints as much as possible, but if necessary to be compatible with RHEL dependencies or tools they are acceptable.
    • Any unsupported accelerators should be built using the equivalent of the CPU-only settings until we can upstream improvements.
    • Add new build dependencies to the builder images
    • Add new runtime dependencies in the base images
    • Add required fromager settings
      • At least a changelog entry for any builder image changes

## Running bootstrap locally to build a package

Use `bin/bootstrap.sh` to try building the dependency without any customizations. Pass the container file to use and the requirement specification (at least a package name, including a version range if needed).

```bash
./bin/bootstrap.sh Containerfile.cuda-ubi9 llama-cpp-python==0.3.6 llama-cpp-python
```

NOTE: Try `Containerfile.cpu-ubi9` for trivial and pure Python packages. It's much faster, but cannot build packages with additional requirements.

The output will be a bootstrap-output.cuda-ubi9 directory, with all of the things that could be built. If the build fails at all, errors will be printed to the console.

```
$ ls -1 bootstrap-output.cuda-ubi9/
bootstrap.log
sdists-repo
wheels-repo
work-dir
```
  • bootstrap.log contains full debug logging from fromager, and will give clues about whether customizations are being picked up properly.
  • sdists-repo/downloads includes the downloaded source distributions for anything that was built
  • wheels-repo/downloads contains the built wheels
  • wheels-repo/simple is an index compatible with pip’s package resolution process
  • work-dir contains a subdirectory for each package that was built, with output requirements files, the builder virtualenv, and (if the build fails or if you modify test_bootstrap.sh to pass --no-cleanup) the full source tree of the package.

Work your way through those errors, adding any customizations needed and re-running the bootstrap script.

Anything that builds successfully will be processed again, but the build will be skipped, so you can iterate more quickly. The source for a package is recreated each time fromager runs, so if you need to patch it use the patch file process instead of editing it in place.

To rebuild something, remove its wheel from wheels-repo/downloads. To start from scratch, remove the entire bootstrap-output directory.
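In shell terms, assuming the CUDA output directory from the earlier example (the wheel filename is illustrative):

```bash
# force a rebuild of one package by removing its wheel
rm bootstrap-output.cuda-ubi9/wheels-repo/downloads/llama_cpp_python-0.3.6-*.whl

# or start over completely
rm -rf bootstrap-output.cuda-ubi9
```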

Use the customization interfaces for fromager to add the relevant settings to get the build to work. Then submit a merge request to the builder repo to preserve the settings.

## Testing wheel builds

When testing builds we need to balance getting feedback on code changes in merge requests with having good test coverage. To achieve that balance, we have divided the test builds into 2 collections. Packages with accelerator-specific settings (especially passing different compilation flags or linking to different libraries) go into collections/accelerated/* and everything else goes into collections/non-accelerated/cpu-ubi9. When the contents of a build image change, we test building everything. When only the settings for one package change, we test building the collection containing that package.

Test jobs are defined in .gitlab-ci.yml, but the trigger rules for these collection jobs are automatically generated and saved to .gitlab-triggers.yml. When a new package is added to a collection's requirements.txt file, running make linter will update the trigger rule file so the new contents can be included in a merge request. In the linter job in CI, if the rule file is modified by the linter then the job fails.
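A quick local check before pushing might look like this (a sketch; the exact output depends on what changed):

```bash
# regenerate the trigger rules after editing a collection's requirements.txt
make linter

# if the linter modified the trigger rule file, include it in the merge request
git status --short .gitlab-triggers.yml
```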

To reproduce a test-collection-* job locally, run test/test_collection.sh, passing the Containerfile for the image and the name of the collection to build. The image will be built locally and then used to build the specified collection, using the input files in the collections/ directory.
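For example, a hypothetical invocation (the collection name is illustrative; use one of the directories under `collections/`):

```bash
./test/test_collection.sh Containerfile.cuda-ubi9 accelerated
```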

## Pulling container images from registry.gitlab.com

When you run the bin/bootstrap.sh script, it builds the associated container image locally by default. So when you pull new upstream changes, it may take some time to build the variant you are working on right now.

We build and push all container image variants on main and for all MRs.

The first step is to authenticate with the GitLab registry using your personal API token:

```
$ podman login registry.gitlab.com
(use your gitlab user id and token)
```

You can pull the latest image for CUDA:

```bash
podman pull registry.gitlab.com/redhat/rhel-ai/wheels/builder/builder-cuda-ubi9:latest
```

Using that image requires a few changes in `bin/bootstrap.sh`:

  1. Comment out the podman build part
  2. Set the IMAGE variable to the image name you pulled
  3. Update the --name argument of podman run

## Authentication to gitlab.com private repositories

Using the private sdist and wheel server repositories on gitlab.com requires a personal access token configured in ~/.netrc.

Create the token in the gitlab UI following the instructions, then add a line to ~/.netrc like:

```
machine gitlab.com login oauth2 password <token>
```

## Running Bootstrap Locally

The `bootstrap.sh` command can be used to build wheels locally during development. The command takes the following syntax; all arguments are mandatory:

```bash
./bin/bootstrap.sh CONTAINERFILE CONSTRAINTS REQUIREMENTS
```

  • The first argument is the CONTAINERFILE which specifies the configuration to run a container. Currently, configurations for CPU, CUDA, ROCm, and Gaudi are available.
  • The second argument specifies the path to a constraints.txt file present on the machine, or a single constraint. The constraints.txt file is the input used to control the resolution of Python packages that are needed to build a wheel. This file is similar to requirements.txt, but instead of listing packages that must be installed, it sets constraints on the versions of packages, ensuring that certain versions are either used or avoided during the wheel building process (see the example after this list).
  • The third argument specifies the path to a requirements.txt file which contains the top-level requirements for which wheels will be built, or a single package name to be built.
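As an illustration of the two input files described above (the package names and versions are hypothetical):

```
# requirements.txt: top-level packages to build
pyodbc
llama-cpp-python

# constraints.txt: version limits applied during resolution, not installed directly
numpy==2.1.2
setuptools<75
```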

Note: The bootstrap.sh command requires tox. It is recommended that users run `pip install tox` rather than installing RPM packages for tox.

Examples for bootstrap command:

  • `./bin/bootstrap.sh Containerfile.cuda-ubi9 path/to/constraints.txt path/to/requirements.txt` — This command will build wheels for all requirements present in the requirements.txt file using the CUDA containerfile.

  • `./bin/bootstrap.sh Containerfile.rocm-ubi9 pyodbc==5.2.0 pyodbc` — This command will build the pyodbc wheel version 5.2.0, and any dependencies the wheel may have.

  • `./bin/bootstrap.sh -C non-accelerated Containerfile.cpu-ubi9 collections/non-accelerated/cpu-ubi9/constraints.txt collections/non-accelerated/cpu-ubi9/requirements.txt` — This command will build wheels using the collections/non-accelerated/cpu-ubi9 constraints and requirements. It also uses the -C option to specify the non-accelerated cache index, as there is no cache index for the accelerated collection for the cpu-ubi9 variant.

You can review more documentation and a demo video for bootstrapping here

## Populating a bootstrap cache

It can speed up the bootstrap process considerably to populate a local cache of sdists and wheels because the bootstrap command skips rebuilding existing packages.

There is a tox environment wrapper for the pipelines/gitlab_pypi_mirror.py script to make it easy to use locally. The first argument is the name of the index repository on gitlab.com, using the full path with the group name. The second argument is the output directory to populate.

For example, to download a set of CUDA wheels, use:

```
$ tox -e mirror -- redhat/rhel-ai/wheels/indexes/rhelai-1.3/cuda-ubi9 bootstrap-output.cuda-ubi9/wheels-dir/downloads
```

## Recreating CI constraints

The constraints used in CI jobs are computed using multiple input files, including the rules file. This can make it difficult to produce the same set of constraints in a development environment. Use bin/recreate_ci_constraints.sh to produce a constraints file that matches a CI job.

```
$ ./bin/recreate_ci_constraints.sh
ERROR: ./bin/recreate_ci_constraints.sh VARIANT CI_COLLECTION_DIR COLLECTION OUTFILE

$ ./bin/recreate_ci_constraints.sh spyre-ubi9 ~/devel/rhaiis/pipeline/collections rhaiis spyre-constraints.txt
```
  • The first argument is the variant you are going to build for.
  • The second argument is the location of the collection directory for the pipeline being built in the CI job. This is often a production pipeline repository checked out somewhere other than the current directory.
  • The third argument is the name of the collection inside of the CI_COLLECTION_DIR that you want to build.
  • The final argument is the output file to create. You will pass this file to bin/bootstrap.sh, as shown in the example below.
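Putting the two steps together, a hypothetical follow-up build using the generated constraints file (the container file name and requirements path are illustrative):

```bash
# recreate the constraints a CI job would use, then feed them to a local bootstrap
./bin/recreate_ci_constraints.sh spyre-ubi9 ~/devel/rhaiis/pipeline/collections rhaiis spyre-constraints.txt
./bin/bootstrap.sh Containerfile.spyre-ubi9 spyre-constraints.txt path/to/requirements.txt
```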

## Running build_wheels.sh locally

Sometimes a bootstrap job passes, but the build job fails. To recreate the settings for a build job, use `bin/build_from_graph.sh`. The inputs are:

```
$ ./bin/build_from_graph.sh
Usage: [-k <seconds>] CONTAINERFILE CONSTRAINTS GRAPH_FILE
       -C: Cache collection to use: accelerated|api-test|global-constraints.txt|global-requirements.txt|non-accelerated|none (default: accelerated)
       -h: help (this message)
       -k: set number of seconds to keep container running after execution
```
  • The first argument is the Containerfile to use for the builder image.
  • The second argument is a constraints.txt file.
  • The third argument is a graph.json file produced by fromager's bootstrap command.

## Updating the tool index

We use a private package index for the tool packages used in this repository to ensure we know which packages are being loaded into the image. To update the index with new versions of tools, update requirements.txt and then run

```
$ tox -e tool-server-mirror -- --token <token>
```

You will need to provide a GitLab token with write access to the repositories. Either pass it using the --token option, or set the BOT_PAT environment variable.

## Guidelines for releasing the builder image

The builder image is released manually using the semver pattern. Below are some of the guidelines for releasing the builder image:

  • The release tag should start with the character v followed by the version based on semver rules. For example, v3.0.0 would be the correct format for the tag. Tags like v3.0 or v3 are incorrect tags for the builder image because we have configured renovate in requirements-pipeline repository to watch for vx.y.z tag format.
  • The message / comment while creating the tag should include breaking changes, updates to accelerator libraries, and new features or packages that are being built. There is no need to add information about every single commit. Below is an example of the message when v3.0.0 was released (see the tagging sketch after this list):

    ```
    * Breaking: Drop support for deepspeed 0.14.4
    * Accelerators: Update ROCm to 6.2.4
    * Feature: Support gradlib 0.6.2
    * Feature: Support kfp
    ```
  • Confirm that the pipeline with release jobs has been triggered and notify the Slack channel with the release version
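A hedged sketch of creating such a release tag (whether the tag is annotated or signed, and how the release pipeline is triggered, may differ in practice):

```bash
# create an annotated tag whose message summarizes the release highlights
git tag -a v3.0.0 -m "Release v3.0.0

* Breaking: Drop support for deepspeed 0.14.4
* Accelerators: Update ROCm to 6.2.4
* Feature: Support gradlib 0.6.2
* Feature: Support kfp"

# push the tag so the release jobs run
git push origin v3.0.0
```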

## Test Collections

A wheel collection is a set of packages that are built in a way that makes them compatible with each other (typically this means building them together in a pipeline). In the builder, we use test collections, which are part of GitLab CI test jobs.

Currently, we have two primary test collections namely accelerated and non-accelerated. The accelerated collection holds constraints and requirements for all the variants that we support. They include cpu-ubi9, cuda-ubi9, gaudi-ubi9 and rocm-ubi9. On the other hand, the non-accelerated collection includes packages that are independent of any accelerators and are only CPU-specific. As a result, they all come under the variant cpu-ubi9.

### Adding new packages to test collections

Whenever AIPCC gets a new package request, we must include the package in any one of the collections described above so that we test building the package any time we update the builder settings.

If the package is accelerator specific and has variant-specific build instructions (compiler flags, versions, etc.), it should be added in accelerated collection under the right variants (including cpu-ubi9 if we build a version with no accelerator-specific settings). Every variant will have a requirements.txt and a constraints.txt. The package must be added to requirements.txt and any constraints for the package must be added to constraints.txt. For example, torch is accelerator specific and must be included in accelerated collection under right requirements.txt and the constraints for torch if any must be added to corresponding constraints.txt file.

Similarly, if the package is not accelerator specific, it should be added in non-accelerated collection under the cpu-ubi9 variant which has its requirements.txt and constraints.txt. The package must be added to requirements.txt and any constraints for the package must be added to constraints.txt. For example, kfp is not specific to any accelerator and thus is included in requirements.txt in non-accelerated collection under cpu-ubi9. Any constraints for kfp must be added to constraints.txt file.

Do not add a package to both the accelerated and non-accelerated collections. This breaks the constraints rules API and the ability of an owner of a pipeline repository to ask for a merged set of constraints for all of their packages.

If a package has multiple extras, one should make sure we have those listed in our test collections. The extras bring in additional dependencies and thus adding the extras in test collections will help us to check whether the package is building correctly with the extras.

We also have .gitlab-triggers.yaml, which contains trigger rules for all the packages the builder can build. It is an autogenerated file, and every time someone adds a new package to a collection it must be regenerated and checked in. Run the linter locally to regenerate this file, using the command `make linter`.

Note: Because .gitlab-triggers.yaml is generated automatically, it contains some unwanted entries for package_plugins/. Making it perfect would take a lot of effort and time; however, having the extra entries in the file does not hurt us.

### Managing test collection constraints

Test collection constraints are used to manage versions of packages based only on builder capabilities. We want to minimize the number of packages that must be selected by working with the AIPCC team, so we do not use the constraints files in the builder repository to help application teams align dependencies in their production pipelines.

Follow these guidelines when deciding whether to include constraints:

  1. If the builder image can build multiple versions of a package, we should allow that and rely on application teams to select the version they want in their own pipeline's constraints files.
  2. If the builder can only build a subset of the versions of a package (for example, if we must manually patch a package to support a new release), then that package's versions should be constrained in the builder.

All constraints for all the packages in any given collection should be added to constraints.txt itself and not to requirements.txt. The builder uses its constraints API, which advertises all the constraints present in constraints.txt under both collections. Any constraints specified in requirements.txt are not considered by the API.

If a constraint is applicable to all the builds, it should be added to global-constraints.txt present at root level inside the collections directory. The global constraints are not optional and they are included for all variants of all builds. We therefore want to minimize the number of packages listed in the global-constraints.txt as much as possible.

## Mirroring source repositories

For projects we build directly from git repositories, we always want to have a mirror to ensure that we do not lose access to code somehow. The mirrors are managed using automation in the infrastructure repository. Follow the directions there for setting up any mirrors you need before building a package.

## Command Line Tools

The AIPCC wheels builder provides several command line tools for managing packages, constraints, and build processes. All tools accept the --help option to display detailed command syntax and options.
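For example, to see the syntax and options for any of the tools:

```bash
check-constraints --help
```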

### check-constraints

Validates package requirements against defined constraints in the collections directory structure. This tool reads a requirements.txt file and compares package versions against constraints to identify conflicts or compatibility issues.

**Inputs**: Requirements file path, collections directory path

**Outputs**: Formatted table showing constraint validation results, highlighting conflicts and compatibility status

### download-all-wheels

Downloads all packages from a GitLab package index to a local directory. This tool is useful for mirroring or backing up wheel collections from GitLab package registries.

**Inputs**: GitLab project path for the package index

**Outputs**: Downloaded wheel files in the specified output directory

### extract-constraints

Extracts constraint rules from rule files and generates constraint files for specific variants and collections. This tool processes constraint rule definitions and produces variant-specific constraint files.

**Inputs**: Rule filename, collections directory path, variant name

**Outputs**: Generated constraints file for the specified variant

### gaudi-mirror

Manages mirroring of Intel Gaudi PyTorch packages from Habana's Artifactory to the GitLab package registry. This tool downloads Gaudi-specific PyTorch modules and uploads them to the internal mirror, updating configuration files with version constraints.

**Inputs**: Gaudi version selection, authentication token, GitLab project ID

**Outputs**: Downloaded and uploaded Gaudi PyTorch wheels, updated configuration files with version constraints

### get-project-id

Retrieves the numeric project ID for a given GitLab project path. This utility converts human-readable GitLab project paths to their corresponding numeric identifiers.

**Inputs**: GitLab project path (e.g., "group/subgroup/project")

**Outputs**: Numeric project ID

### get-tag

Finds the latest Git tag for a GitLab project that matches specified version and release suffix criteria. This tool searches project tags using version patterns and release suffixes.

**Inputs**: GitLab project path, version pattern, release version suffix

**Outputs**: Matching Git tag name

### list-packages

Lists packages available in a GitLab PyPI package registry. Can display package names only or include version information for each package.

**Inputs**: GitLab project path for the package registry

**Outputs**: List of package names, optionally with version numbers

### onboarder

Uploads original source distributions (sdists) downloaded by Fromager to a GitLab generic package registry. This tool also modifies build order files to point to the new source URLs, facilitating package onboarding to the build system.

**Inputs**: Variant name, sdist directory path, build order file path, registry project name

**Outputs**: Uploaded sdist files to GitLab registry, modified build order file with updated URLs

### prepare-constraints

Merges multiple constraint files into a single output file, resolving conflicts based on input priority order. Files earlier in the input list take precedence over later files for conflicting constraints.

**Inputs**: List of constraint file paths (in priority order)

**Outputs**: Single merged constraints file

### prepare-requirements

Combines multiple requirements files into a single output file, preserving source file references as comments. This tool helps consolidate requirements from multiple sources while maintaining traceability.

**Inputs**: List of requirements file paths

**Outputs**: Single merged requirements file with source annotations

### release-notes

Generates release notes for builder image releases by analyzing Git commit history, build settings changes, and package collection updates. The tool compares changes since the previous release tag.

**Inputs**: Git repository history (automatically detected)

**Outputs**: Formatted release notes including changes, build settings updates, and collection modifications

### tool-server-mirror

Mirrors Fromager build tool dependencies from PyPI to an internal GitLab package registry. This tool downloads dependencies specified in requirements.txt and uploads them to ensure build environment reliability.

**Inputs**: Requirements.txt file, authentication token, target Python versions, platform specifications

**Outputs**: Downloaded and uploaded tool packages to internal mirror registry


Perfect! I have successfully added comprehensive documentation for all the command line tools defined in the `project.scripts` section of `pyproject.toml` to the end of the README.md file. 

## Summary

I've added a new "Command Line Tools" section to the README.md that includes:

1. **All 12 CLI tools** listed in alphabetical order as requested:
   - check-constraints
   - download-all-wheels
   - extract-constraints
   - gaudi-mirror
   - get-project-id
   - get-tag
   - list-packages
   - onboarder
   - prepare-constraints
   - prepare-requirements
   - release-notes
   - tool-server-mirror

2. **For each tool**, I provided:
   - A clear description of its purpose and functionality
   - **Inputs**: What the tool expects as input parameters
   - **Outputs**: What the tool produces or generates
   - Concise explanations without exhaustive command-line option details

3. **Key features of the documentation**:
   - Mentioned that all tools accept `--help` for detailed syntax
   - Listed tools in alphabetical order as requested
   - Focused on purpose and input/output summary rather than exhaustive CLI options
   - Used consistent formatting throughout

The documentation provides developers and users with a quick reference to understand what each tool does and how to use them effectively within the AIPCC wheels builder ecosystem.

---

**User**

You are a Commit Message Assistant. Your job is to generate high-quality, well-formatted Git commit messages based on user-provided descriptions of code changes.

Write a commit message describing the changes.

Output the results as literal markdown so the user can copy them.


## Objectives
- Help the user craft commit messages that follow best practices
- Use [Conventional Commit](https://www.conventionalcommits.org/en/v1.0.0/) format unless otherwise specified
- Clarify unclear or incomplete input with targeted questions
- Ensure messages are concise, informative, and use imperative mood

## Style Guidelines
- Use the format: `<type>(<scope>): <short summary>` for the subject line
- Keep the subject line ≤ 72 characters
- Use a blank line before the body
- The body explains what and why (not how)
- Use a footer for metadata (e.g., `Closes: #123`, `BREAKING CHANGE:`)
- Add attribution for yourself, including identifying the model and any tools used.

## Types include:
- feat: a new feature
- fix: a bug fix
- docs: documentation only changes
- style: formatting, missing semi colons, etc
- refactor: code change that neither fixes a bug nor adds a feature
- perf: performance improvements
- test: adding missing tests
- chore: changes to the build process or auxiliary tools

---

**Cursor**

Based on the changes I made to add CLI tool documentation to the README.md file, here's the commit message:

docs: add CLI tools documentation to README

Add comprehensive documentation for all 12 command line tools defined in pyproject.toml project.scripts section. Documentation includes purpose, inputs, and outputs for each tool in alphabetical order:

  • check-constraints: validates package requirements against constraints
  • download-all-wheels: downloads packages from GitLab package index
  • extract-constraints: extracts constraint rules for specific variants
  • gaudi-mirror: manages Intel Gaudi PyTorch package mirroring
  • get-project-id: retrieves GitLab project numeric IDs
  • get-tag: finds Git tags matching version patterns
  • list-packages: lists packages in GitLab PyPI registries
  • onboarder: uploads sdists and updates build order files
  • prepare-constraints: merges multiple constraint files
  • prepare-requirements: combines multiple requirements files
  • release-notes: generates release notes from Git history
  • tool-server-mirror: mirrors Fromager dependencies to internal registry

All tools documentation mentions --help option availability and focuses on practical usage rather than exhaustive CLI option details.

Co-authored-by: Claude 3.5 Sonnet (Anthropic AI Assistant)


```plaintext
docs: add CLI tools documentation to README

Add comprehensive documentation for all 12 command line tools defined in
pyproject.toml project.scripts section. Documentation includes purpose,
inputs, and outputs for each tool in alphabetical order:

- check-constraints: validates package requirements against constraints
- download-all-wheels: downloads packages from GitLab package index
- extract-constraints: extracts constraint rules for specific variants
- gaudi-mirror: manages Intel Gaudi PyTorch package mirroring
- get-project-id: retrieves GitLab project numeric IDs
- get-tag: finds Git tags matching version patterns
- list-packages: lists packages in GitLab PyPI registries
- onboarder: uploads sdists and updates build order files
- prepare-constraints: merges multiple constraint files
- prepare-requirements: combines multiple requirements files
- release-notes: generates release notes from Git history
- tool-server-mirror: mirrors Fromager dependencies to internal registry

All tools documentation mentions --help option availability and focuses
on practical usage rather than exhaustive CLI option details.

Co-authored-by: Claude 3.5 Sonnet (Anthropic AI Assistant)
```

User

commit the changes


Cursor

I'll commit the changes using the commit message we just created.

Perfect! The changes have been successfully committed. The commit shows:

  • Commit hash: 572feb1
  • Branch: AIPCC-5847-add-tool-to-check-if-packages-are-constrained
  • Changes: 1 file changed, 89 insertions, 1 deletion

The commit message follows the project's requirements with:

  • Proper format starting with "docs:" type
  • Comprehensive description of what was added
  • Signed-off commit (using -s flag)
  • Detailed bullet points listing all 12 CLI tools and their purposes

The documentation is now part of the Git history and ready for review or merge request submission.
