Exported on 10/9/2025 at 18:08:01 EDT from Cursor (1.7.40)
User
Reformat this file as a python requirements.txt file with packagename==version. For example, the first line should read "aiohappyeyeballs==2.6.1". Make similar changes to each line of the file.
Cursor
I'll help you reformat the file to a proper Python requirements.txt format. Let me first read the current file to see its structure.
Now I'll reformat this file to the proper Python requirements.txt format with packagename==version:
aiohappyeyeballs==2.6.1
aiohttp==3.12.15
aiosignal==1.4.0
annotated-types==0.7.0
anyio==4.10.0
astor==0.8.1
attrs==25.3.0
backoff==2.2.1
blake3==1.0.5
cachetools==5.5.2
cbor2==5.7.0
certifi==2025.8.3
cffi==2.0.0
charset-normalizer==3.4.3
click==8.2.1
cloudpickle==3.1.1
compressed-tensors==0.10.2
depyf==0.19.0
dill==0.3.8
diskcache==5.6.3
distro==1.9.0
dnspython==2.8.0
einops==0.8.1
email-validator==2.3.0
fastapi==0.116.1
fastapi-cli==0.0.11
fastapi-cloud-cli==0.1.5
filelock==3.19.1
frozenlist==1.7.0
fsspec==2025.3.0
gguf==0.17.1
h11==0.16.0
hf-xet==1.1.9
httpcore==1.0.9
httptools==0.6.4
httpx==0.28.1
huggingface-hub==0.34.4
idna==3.10
interegular==0.3.3
its-hub==0.2.4
jinja2==3.1.6
jiter==0.10.0
jsonschema==4.25.1
jsonschema-specifications==2025.9.1
lark==1.2.2
llguidance==0.7.30
llvmlite==0.44.0
lm-format-enforcer==0.10.12
markdown-it-py==4.0.0
markupsafe==3.0.2
mdurl==0.1.2
mistral-common==1.8.4
mlx==0.29.2
mlx-lm==0.28.2
mlx-metal==0.29.2
mpmath==1.3.0
msgpack==1.1.1
msgspec==0.19.0
multidict==6.6.4
networkx==3.5
ninja==1.13.0
numba==0.61.2
numpy==2.2.6
openai==1.90.0
opencv-python-headless==4.12.0.88
outlines-core==0.2.10
packaging==25.0
partial-json-parser==0.2.1.1.post6
pillow==11.3.0
prometheus-client==0.23.0
prometheus-fastapi-instrumentator==7.1.0
propcache==0.3.2
protobuf==6.32.0
psutil==7.0.0
py-cpuinfo==9.0.0
pybase64==1.4.2
pycountry==24.6.1
pycparser==2.23
pydantic==2.11.7
pydantic-core==2.33.2
pydantic-extra-types==2.10.5
pygments==2.19.2
python-dotenv==1.1.1
python-json-logger==3.3.0
python-multipart==0.0.20
pyyaml==6.0.2
pyzmq==27.1.0
ray==2.49.1
referencing==0.36.2
regex==2025.9.1
requests==2.32.5
reward-hub==0.1.2
rich==14.1.0
rich-toolkit==0.15.1
rignore==0.6.4
rpds-py==0.27.1
safetensors==0.6.2
scipy==1.16.2
sentencepiece==0.2.1
sentry-sdk==2.37.1
setuptools==79.0.1
shellingham==1.5.4
six==1.17.0
sniffio==1.3.1
soundfile==0.13.1
soxr==1.0.0
starlette==0.47.3
sympy==1.14.0
tiktoken==0.11.0
tokenizers==0.21.4
torch==2.7.1
torchaudio==2.7.1
torchvision==0.22.1
tqdm==4.67.1
transformers==4.53.2
typer==0.17.4
typing-extensions==4.15.0
typing-inspection==0.4.1
urllib3==2.5.0
uvicorn==0.35.0
uvloop==0.21.0
vllm==0.10.0
watchfiles==1.1.0
websockets==15.0.1
xgrammar==0.1.21
yarl==1.20.1
Perfect! I've successfully reformatted the file to use the proper Python requirements.txt format. Each line now follows the packagename==version format as requested. For example:
- aiohappyeyeballs==2.6.1 (was aiohappyeyeballs 2.6.1)
- aiohttp==3.12.15 (was aiohttp 3.12.15)
- And so on for all 126 packages in the file
The file is now in the standard Python requirements.txt format that can be used with pip install -r its-hub-requirements.txt or similar package management tools.
User
Add a python program to package_plugins/cli called check-constraints that will read a requirements.txt file specified on the command line and look through all of the constraints.txt files under the collections directory to find places where the same package has a constraint. The output should be a table with columns for the input package name, the original requirement, the constraint for each collection and whether the requirement matches that constraint. If a collection has no constraint the cell should be empty.
Cursor
I'll create a Python program called check-constraints in the package_plugins/cli directory that analyzes requirements against constraints files. Let me start by exploring the directory structure and understanding the existing constraints files.
Now let me look at a few constraints files to understand their format:
Great! Now I understand the structure. The CLI directory already exists, so I can proceed with creating the script. Let me implement the check-constraints tool:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.

This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""

import argparse
import pathlib
import re
import sys
import typing
from collections import defaultdict


class Requirement:
    """Represents a parsed requirement from requirements.txt or constraints.txt."""

    def __init__(self, line: str) -> None:
        self.original_line = line.strip()
        self.package_name = ""
        self.version_spec = ""
        self.operator = ""
        self.version = ""
        if self.original_line and not self.original_line.startswith('#'):
            self._parse_requirement()

    def _parse_requirement(self) -> None:
        """Parse a requirement line into components."""
        # Handle various requirement formats: ==, ~=, >=, <=, >, <, !=
        pattern = r'^([a-zA-Z0-9_-]+(?:[a-zA-Z0-9_.-]*[a-zA-Z0-9_-])?)\s*([~!<>=]+)\s*([0-9]+(?:\.[0-9]+)*(?:[a-zA-Z0-9_.-]*)?)'
        match = re.match(pattern, self.original_line)
        if match:
            self.package_name = match.group(1).lower().replace('_', '-')
            self.operator = match.group(2)
            self.version = match.group(3)
            self.version_spec = f"{self.operator}{self.version}"
        else:
            # Try simpler pattern for package names without version specs
            simple_pattern = r'^([a-zA-Z0-9_-]+(?:[a-zA-Z0-9_.-]*[a-zA-Z0-9_-])?)\s*$'
            simple_match = re.match(simple_pattern, self.original_line)
            if simple_match:
                self.package_name = simple_match.group(1).lower().replace('_', '-')

    def is_valid(self) -> bool:
        """Check if this is a valid requirement."""
        return bool(self.package_name)

    def matches_constraint(self, constraint: 'Requirement') -> bool:
        """Check if this requirement matches the given constraint."""
        if not self.is_valid() or not constraint.is_valid():
            return False
        if self.package_name != constraint.package_name:
            return False
        # If either has no version spec, we can't compare
        if not self.version_spec or not constraint.version_spec:
            return self.version_spec == constraint.version_spec
        # For exact matches
        if self.operator == "==" and constraint.operator == "==":
            return self.version == constraint.version
        # For compatible release (~=) vs exact (==)
        if self.operator == "==" and constraint.operator == "~=":
            return self._version_compatible_with(self.version, constraint.version)
        if self.operator == "~=" and constraint.operator == "==":
            return self._version_compatible_with(constraint.version, self.version)
        # For range constraints vs exact
        if self.operator == "==" and constraint.operator.startswith((">=", "<=", ">", "<")):
            return self._version_satisfies_constraint(self.version, constraint.operator, constraint.version)
        # For complex constraints, do string comparison for now
        return self.version_spec == constraint.version_spec

    def _version_compatible_with(self, version: str, compatible_version: str) -> bool:
        """Check if version is compatible with compatible_version using ~= semantics."""
        try:
            v_parts = [int(x) for x in version.split('.')]
            c_parts = [int(x) for x in compatible_version.split('.')]
            # ~=X.Y means >=X.Y and <X.(Y+1)
            if len(c_parts) >= 2:
                # Check >= compatible_version
                if v_parts[:len(c_parts)] < c_parts:
                    return False
                # Check < X.(Y+1)
                upper_bound = c_parts[:-1] + [c_parts[-1] + 1]
                if v_parts[:len(upper_bound)] >= upper_bound:
                    return False
                return True
        except (ValueError, IndexError):
            pass
        return False

    def _version_satisfies_constraint(self, version: str, operator: str, constraint_version: str) -> bool:
        """Check if version satisfies the constraint."""
        try:
            v_parts = [int(x) for x in version.split('.')]
            c_parts = [int(x) for x in constraint_version.split('.')]
            # Pad with zeros to make same length
            max_len = max(len(v_parts), len(c_parts))
            v_parts.extend([0] * (max_len - len(v_parts)))
            c_parts.extend([0] * (max_len - len(c_parts)))
            if operator == ">=":
                return v_parts >= c_parts
            elif operator == "<=":
                return v_parts <= c_parts
            elif operator == ">":
                return v_parts > c_parts
            elif operator == "<":
                return v_parts < c_parts
            elif operator == "==":
                return v_parts == c_parts
            elif operator == "!=":
                return v_parts != c_parts
        except (ValueError, IndexError):
            pass
        return False


def parse_requirements_file(file_path: pathlib.Path) -> typing.List[Requirement]:
    """Parse a requirements.txt or constraints.txt file."""
    requirements = []
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            for line_num, line in enumerate(f, 1):
                line = line.strip()
                if line and not line.startswith('#'):
                    req = Requirement(line)
                    if req.is_valid():
                        requirements.append(req)
    except FileNotFoundError:
        print(f"Warning: File not found: {file_path}", file=sys.stderr)
    except Exception as e:
        print(f"Error reading {file_path}: {e}", file=sys.stderr)
    return requirements


def find_constraint_files(collections_dir: pathlib.Path) -> typing.List[pathlib.Path]:
    """Find all constraints.txt files in the collections directory."""
    constraint_files = []
    for constraint_file in collections_dir.rglob("constraints.txt"):
        constraint_files.append(constraint_file)
    return sorted(constraint_files)


def get_collection_name(constraint_file: pathlib.Path, collections_dir: pathlib.Path) -> str:
    """Extract a readable collection name from the constraint file path."""
    relative_path = constraint_file.relative_to(collections_dir)
    # Remove the 'constraints.txt' part and join the remaining parts
    parts = relative_path.parts[:-1]  # Remove 'constraints.txt'
    return '/'.join(parts)


def build_constraints_map(collections_dir: pathlib.Path) -> typing.Dict[str, typing.Dict[str, Requirement]]:
    """Build a map of collection -> package_name -> constraint."""
    constraints_map = {}
    constraint_files = find_constraint_files(collections_dir)
    for constraint_file in constraint_files:
        collection_name = get_collection_name(constraint_file, collections_dir)
        constraints = parse_requirements_file(constraint_file)
        constraints_map[collection_name] = {}
        for constraint in constraints:
            if constraint.is_valid():
                constraints_map[collection_name][constraint.package_name] = constraint
    return constraints_map


def format_table(requirements: typing.List[Requirement],
                 constraints_map: typing.Dict[str, typing.Dict[str, Requirement]]) -> str:
    """Format the results as a table."""
    if not requirements:
        return "No valid requirements found."
    # Get all collection names, sorted
    collection_names = sorted(constraints_map.keys())
    # Calculate column widths
    max_package_width = max(len("Package"), max(len(req.package_name) for req in requirements))
    max_requirement_width = max(len("Requirement"), max(len(req.version_spec) if req.version_spec else 0 for req in requirements))
    # Calculate constraint column widths
    constraint_widths = {}
    for collection in collection_names:
        max_width = len(collection)
        for req in requirements:
            if req.package_name in constraints_map[collection]:
                constraint = constraints_map[collection][req.package_name]
                max_width = max(max_width, len(constraint.version_spec))
        constraint_widths[collection] = max_width
    # Build header
    header_parts = ["Package".ljust(max_package_width), "Requirement".ljust(max_requirement_width)]
    for collection in collection_names:
        header_parts.append(collection.ljust(constraint_widths[collection]))
    header_parts.append("Match")
    header = " | ".join(header_parts)
    separator = "-" * len(header)
    # Build rows
    rows = [header, separator]
    for req in requirements:
        if not req.is_valid():
            continue
        row_parts = [
            req.package_name.ljust(max_package_width),
            (req.version_spec or "").ljust(max_requirement_width)
        ]
        # Check constraints for each collection
        matches = []
        for collection in collection_names:
            if req.package_name in constraints_map[collection]:
                constraint = constraints_map[collection][req.package_name]
                constraint_text = constraint.version_spec
                matches.append(req.matches_constraint(constraint))
            else:
                constraint_text = ""
                matches.append(None)  # No constraint
            row_parts.append(constraint_text.ljust(constraint_widths[collection]))
        # Overall match status
        if all(match is None for match in matches):
            match_status = "No constraints"
        elif all(match in (True, None) for match in matches):
            match_status = "✓"
        else:
            match_status = "✗"
        row_parts.append(match_status)
        rows.append(" | ".join(row_parts))
    return "\n".join(rows)


def main() -> None:
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Check requirements against constraints in collections directory"
    )
    parser.add_argument(
        "requirements_file",
        type=pathlib.Path,
        help="Path to requirements.txt file to check"
    )
    parser.add_argument(
        "--collections-dir",
        type=pathlib.Path,
        default=pathlib.Path(__file__).parent.parent.parent / "collections",
        help="Path to collections directory (default: ../../collections)"
    )
    args = parser.parse_args()
    if not args.requirements_file.exists():
        print(f"Error: Requirements file not found: {args.requirements_file}", file=sys.stderr)
        sys.exit(1)
    if not args.collections_dir.exists():
        print(f"Error: Collections directory not found: {args.collections_dir}", file=sys.stderr)
        sys.exit(1)
    # Parse requirements file
    requirements = parse_requirements_file(args.requirements_file)
    if not requirements:
        print("No valid requirements found in the input file.", file=sys.stderr)
        sys.exit(1)
    # Build constraints map
    constraints_map = build_constraints_map(args.collections_dir)
    if not constraints_map:
        print("No constraint files found in collections directory.", file=sys.stderr)
        sys.exit(1)
    # Generate and print table
    table = format_table(requirements, constraints_map)
    print(table)


if __name__ == "__main__":
    main()
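As a sanity check, the zero-padded list comparison that _version_satisfies_constraint performs can be exercised standalone. The function name below is illustrative, not part of the script:

```python
# Illustrative sketch of the zero-padded comparison used by
# _version_satisfies_constraint: pad both dotted versions with zeros
# to equal length, then compare the integer lists.
def satisfies(version: str, op: str, target: str) -> bool:
    v = [int(x) for x in version.split(".")]
    t = [int(x) for x in target.split(".")]
    n = max(len(v), len(t))
    v += [0] * (n - len(v))
    t += [0] * (n - len(t))
    # Python compares lists of ints lexicographically, element by element
    return {
        "==": v == t,
        "!=": v != t,
        ">=": v >= t,
        "<=": v <= t,
        ">": v > t,
        "<": v < t,
    }[op]

print(satisfies("3.12.15", ">=", "3.12"))  # True
print(satisfies("2.6", "==", "2.6.0"))     # True: padding makes 2.6 equal 2.6.0
print(satisfies("1.9", "<", "1.10"))       # True: numeric, not string, comparison
```

Note the last case: comparing the raw strings would get "1.9" < "1.10" wrong, which is why the script converts each component to an integer first.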
Now let me make the script executable and test it:
User
There is a function to parse a requirements file in package_plugins/cli/prepare_requirements_constraints.py. Use that instead of making a new one.
Cursor
You're absolutely right! Let me examine the existing function in prepare_requirements_constraints.py and use it instead of creating a new one.
Perfect! I can see there's already a parse_requirements_file function that returns a list of tuples of (filename, line_number, Requirement), where Requirement is from the packaging library. This is much better than my custom implementation. Let me update the check-constraints script to use this existing function:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.

This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""

import argparse
import pathlib
import sys
import typing

from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.utils import canonicalize_name

# Import the existing parser from the CLI tools
from prepare_requirements_constraints import parse_requirements_file


def requirement_matches_constraint(requirement: Requirement, constraint: Requirement) -> bool:
    """Check if a requirement matches the given constraint."""
    # Must be the same package
    if canonicalize_name(requirement.name) != canonicalize_name(constraint.name):
        return False
    # If requirement has no specifier, it matches any constraint
    if not requirement.specifier:
        return True
    # If constraint has no specifier, requirement must also have none
    if not constraint.specifier:
        return not requirement.specifier
    # Check if the requirement's version constraints are satisfied by the constraint
    # This is a simplified check - for exact matches
    req_specs = str(requirement.specifier).strip()
    constraint_specs = str(constraint.specifier).strip()
    # For exact version matches
    if req_specs.startswith('==') and constraint_specs.startswith('=='):
        return req_specs == constraint_specs
    # For compatible release constraints (~=) vs exact (==)
    if req_specs.startswith('==') and constraint_specs.startswith('~='):
        req_version = req_specs[2:].strip()
        constraint_version = constraint_specs[2:].strip()
        try:
            # Check if the exact version satisfies the compatible release constraint
            spec_set = SpecifierSet(constraint_specs)
            return spec_set.contains(req_version)
        except Exception:
            return False
    # For range constraints
    if req_specs.startswith('=='):
        req_version = req_specs[2:].strip()
        try:
            spec_set = SpecifierSet(constraint_specs)
            return spec_set.contains(req_version)
        except Exception:
            return False
    # For other cases, do string comparison
    return req_specs == constraint_specs


def find_constraint_files(collections_dir: pathlib.Path) -> typing.List[pathlib.Path]:
    """Find all constraints.txt files in the collections directory."""
    constraint_files = []
    for constraint_file in collections_dir.rglob("constraints.txt"):
        constraint_files.append(constraint_file)
    return sorted(constraint_files)


def get_collection_name(constraint_file: pathlib.Path, collections_dir: pathlib.Path) -> str:
    """Extract a readable collection name from the constraint file path."""
    relative_path = constraint_file.relative_to(collections_dir)
    # Remove the 'constraints.txt' part and join the remaining parts
    parts = relative_path.parts[:-1]  # Remove 'constraints.txt'
    return '/'.join(parts)


def build_constraints_map(collections_dir: pathlib.Path) -> typing.Dict[str, typing.Dict[str, Requirement]]:
    """Build a map of collection -> package_name -> constraint."""
    constraints_map = {}
    constraint_files = find_constraint_files(collections_dir)
    for constraint_file in constraint_files:
        collection_name = get_collection_name(constraint_file, collections_dir)
        constraints = parse_requirements_file(constraint_file)
        constraints_map[collection_name] = {}
        for constraint in constraints:
            if constraint.is_valid():
                constraints_map[collection_name][constraint.package_name] = constraint
    return constraints_map


def format_table(requirements: typing.List[Requirement],
                 constraints_map: typing.Dict[str, typing.Dict[str, Requirement]]) -> str:
    """Format the results as a table."""
    if not requirements:
        return "No valid requirements found."
    # Get all collection names, sorted
    collection_names = sorted(constraints_map.keys())
    # Calculate column widths
    max_package_width = max(len("Package"), max(len(req.package_name) for req in requirements))
    max_requirement_width = max(len("Requirement"), max(len(req.version_spec) if req.version_spec else 0 for req in requirements))
    # Calculate constraint column widths
    constraint_widths = {}
    for collection in collection_names:
        max_width = len(collection)
        for req in requirements:
            if req.package_name in constraints_map[collection]:
                constraint = constraints_map[collection][req.package_name]
                max_width = max(max_width, len(constraint.version_spec))
        constraint_widths[collection] = max_width
    # Build header
    header_parts = ["Package".ljust(max_package_width), "Requirement".ljust(max_requirement_width)]
    for collection in collection_names:
        header_parts.append(collection.ljust(constraint_widths[collection]))
    header_parts.append("Match")
    header = " | ".join(header_parts)
    separator = "-" * len(header)
    # Build rows
    rows = [header, separator]
    for req in requirements:
        if not req.is_valid():
            continue
        row_parts = [
            req.package_name.ljust(max_package_width),
            (req.version_spec or "").ljust(max_requirement_width)
        ]
        # Check constraints for each collection
        matches = []
        for collection in collection_names:
            if req.package_name in constraints_map[collection]:
                constraint = constraints_map[collection][req.package_name]
                constraint_text = constraint.version_spec
                matches.append(req.matches_constraint(constraint))
            else:
                constraint_text = ""
                matches.append(None)  # No constraint
            row_parts.append(constraint_text.ljust(constraint_widths[collection]))
        # Overall match status
        if all(match is None for match in matches):
            match_status = "No constraints"
        elif all(match in (True, None) for match in matches):
            match_status = "✓"
        else:
            match_status = "✗"
        row_parts.append(match_status)
        rows.append(" | ".join(row_parts))
    return "\n".join(rows)


def main() -> None:
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Check requirements against constraints in collections directory"
    )
    parser.add_argument(
        "requirements_file",
        type=pathlib.Path,
        help="Path to requirements.txt file to check"
    )
    parser.add_argument(
        "--collections-dir",
        type=pathlib.Path,
        default=pathlib.Path(__file__).parent.parent.parent / "collections",
        help="Path to collections directory (default: ../../collections)"
    )
    args = parser.parse_args()
    if not args.requirements_file.exists():
        print(f"Error: Requirements file not found: {args.requirements_file}", file=sys.stderr)
        sys.exit(1)
    if not args.collections_dir.exists():
        print(f"Error: Collections directory not found: {args.collections_dir}", file=sys.stderr)
        sys.exit(1)
    # Parse requirements file
    requirements = parse_requirements_file(args.requirements_file)
    if not requirements:
        print("No valid requirements found in the input file.", file=sys.stderr)
        sys.exit(1)
    # Build constraints map
    constraints_map = build_constraints_map(args.collections_dir)
    if not constraints_map:
        print("No constraint files found in collections directory.", file=sys.stderr)
        sys.exit(1)
    # Generate and print table
    table = format_table(requirements, constraints_map)
    print(table)


if __name__ == "__main__":
    main()
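For reference, the packaging primitives the script now leans on behave like this. A quick sketch, assuming the third-party packaging distribution is importable (the example package name is made up):

```python
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.utils import canonicalize_name

# canonicalize_name collapses case and underscore/dash differences,
# which is why it is used as the dictionary key in build_constraints_map
req = Requirement("Torch_Audio==2.7.1")
print(canonicalize_name(req.name))  # torch-audio

# SpecifierSet.contains applies PEP 440 semantics, e.g. compatible release:
# ~=2.7 means >=2.7, <3.0
print(SpecifierSet("~=2.7").contains("2.7.1"))  # True
print(SpecifierSet("~=2.7").contains("3.0"))    # False
```

This is also why the SpecifierSet-based branches above are preferable to string comparison: they handle version ordering correctly instead of comparing specifier text.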
User
The python script will need to be renamed to check_constraints.py and then its main function should be registered in pyproject.toml to create the command line program.
Cursor
You're absolutely right! Let me rename the script to check_constraints.py and then register it in pyproject.toml as a console script entry point.
Now let me check the current pyproject.toml to see how other console scripts are registered:
Perfect! I can see the pattern. I need to add the entry to the [project.scripts] section. Let me add the check-constraints command:
[build-system]
requires = ["hatchling", "hatch-vcs", "hatch-requirements-txt"]
build-backend = "hatchling.build"
[project]
name = "rhel-ai-package-builder-plugins"
authors = [
{ name = "Mark McLoughlin", email = "[email protected]" },
{ name = "Doug Hellmann", email = "[email protected]" },
]
description = "Wheel mirror builder"
dynamic = ["version", "dependencies", "optional-dependencies"]
classifiers = [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Utilities",
]
requires-python = ">=3.11"
[tool.hatch.version]
source = "vcs"
[tool.hatch.metadata]
allow-direct-references = true
[tool.hatch.metadata.hooks.requirements_txt]
files = ["requirements.txt"]
[tool.hatch.metadata.hooks.requirements_txt.optional-dependencies]
dev = ["requirements-dev.txt"]
[tool.hatch.build.targets.wheel]
packages = ["package_plugins"]
[tool.hatch.build.targets.sdist]
exclude = [
"requirements-pipeline",
]
[project.scripts]
gaudi-mirror = "package_plugins.cli.gaudi_mirror:main"
tool-server-mirror = "package_plugins.cli.tool_server_mirror:main"
onboarder = "package_plugins.cli.onboarder:main"
get-project-id = "package_plugins.cli.get_project_id:main"
get-tag = "package_plugins.cli.get_tag:main"
release-notes = "package_plugins.cli.release_notes:main"
download-all-wheels = "package_plugins.cli.download_all_wheels:main"
prepare-constraints = "package_plugins.cli.prepare_requirements_constraints:prepare_constraints"
prepare-requirements = "package_plugins.cli.prepare_requirements_constraints:prepare_requirements"
extract-constraints = "package_plugins.cli.prepare_requirements_constraints:extract_constraints"
list-packages = "package_plugins.cli.list_packages:list_packages"
check-constraints = "package_plugins.cli.check_constraints:main"
[project.entry-points."fromager.project_overrides"]
aiter = "package_plugins.aiter"
amdsmi = "package_plugins.amdsmi"
aotriton = "package_plugins.aotriton"
bitsandbytes = "package_plugins.bitsandbytes"
certifi = "package_plugins.certifi"
cmake = "package_plugins.cmake"
outlines_core = "package_plugins.outlines_core"
deep_ep = "package_plugins.deep_ep"
deep_gemm = "package_plugins.deep_gemm"
docling = "package_plugins.docling"
faiss_cpu = "package_plugins.faiss_cpu"
flashinfer_python = "package_plugins.flashinfer_python"
flit_core = "package_plugins.flit_core"
gast = "package_plugins.gast"
hf_xet = "package_plugins.hf_xet"
kfp = "package_plugins.kfp"
litellm = "package_plugins.litellm"
llvmlite = "package_plugins.llvmlite"
meson = "package_plugins.meson"
nixl = "package_plugins.nixl"
nvidia_cudnn_frontend = "package_plugins.nvidia_cudnn_frontend"
opencv_python_headless = "package_plugins.opencv_python_headless"
pplx_kernels = "package_plugins.pplx_kernels"
pyarrow = "package_plugins.pyarrow"
pydantic_core = "package_plugins.pydantic_core"
pypdfium2 = "package_plugins.pypdfium2"
submodlib_py = "package_plugins.submodlib_py"
symengine = "package_plugins.symengine"
tilelang = "package_plugins.tilelang"
torch = "package_plugins.torch"
torchao = "package_plugins.torchao"
torchaudio = "package_plugins.torchaudio"
torchvision = "package_plugins.torchvision"
triton = "package_plugins.triton"
tqdm_multiprocess = "package_plugins.tqdm_multiprocess"
trl = "package_plugins.trl"
vllm = "package_plugins.vllm"
vllm_hpu_extension = "package_plugins.vllm_hpu_extension"
[project.entry-points."fromager.hooks"]
post_bootstrap = "package_plugins.hooks.upload_after_build_wheel:post_bootstrap"
post_build = "package_plugins.hooks.upload_after_build_wheel:post_build"
prebuilt_wheel = "package_plugins.hooks.upload_after_build_wheel:prebuilt_wheel"
[tool.ruff]
target-version = "py311"
# same as black's default line length
line-length = 88
[tool.ruff.lint]
# Allow fix for all enabled rules (when `--fix`) is provided.
fixable = ["ALL"]
unfixable = []
select = [
"B", # flake8-bugbear
"E", # pycodestyle
"F", # pyflakes
"Q", # flake8-quotes
"I", # isort
"N", # pep8-naming
"W", # pycodestyle
"RUF", # ruff-specific rules
"UP", # pyupgrade
"TID", # flake8-tidy-imports
]
ignore = [
"E501", # Line too long
"RUF005", # Consider iterable unpacking instead of concatenation
"TID252", # Prefer absolute imports over relative imports from parent modules
"UP015", # Unnecessary open mode parameters
]
[tool.ruff.lint.isort]
known-first-party = ["package_plugins"]
known-third-party = ["fromager"]
[tool.mypy]
check_untyped_defs = true
disallow_incomplete_defs = true
disallow_untyped_defs = true
Now let me test the script to make sure it works. First, let me check for any linting issues and then test it:
Great! Now let me test the script by running it directly:
I see the issue! The parse_requirements_file function returns tuples of (filename, line_number, Requirement), not just Requirement objects. Let me fix this:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.

This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""

import argparse
import pathlib
import sys
import typing

from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.utils import canonicalize_name

# Import the existing parser from the CLI tools
from prepare_requirements_constraints import parse_requirements_file


def requirement_matches_constraint(requirement: Requirement, constraint: Requirement) -> bool:
    """Check if a requirement matches the given constraint."""
    # Must be the same package
    if canonicalize_name(requirement.name) != canonicalize_name(constraint.name):
        return False
    # If requirement has no specifier, it matches any constraint
    if not requirement.specifier:
        return True
    # If constraint has no specifier, requirement must also have none
    if not constraint.specifier:
        return not requirement.specifier
    # Check if the requirement's version constraints are satisfied by the constraint
    # This is a simplified check - for exact matches
    req_specs = str(requirement.specifier).strip()
    constraint_specs = str(constraint.specifier).strip()
    # For exact version matches
    if req_specs.startswith('==') and constraint_specs.startswith('=='):
        return req_specs == constraint_specs
    # For compatible release constraints (~=) vs exact (==)
    if req_specs.startswith('==') and constraint_specs.startswith('~='):
        req_version = req_specs[2:].strip()
        constraint_version = constraint_specs[2:].strip()
        try:
            # Check if the exact version satisfies the compatible release constraint
            spec_set = SpecifierSet(constraint_specs)
            return spec_set.contains(req_version)
        except Exception:
            return False
    # For range constraints
    if req_specs.startswith('=='):
        req_version = req_specs[2:].strip()
        try:
            spec_set = SpecifierSet(constraint_specs)
            return spec_set.contains(req_version)
        except Exception:
            return False
    # For other cases, do string comparison
    return req_specs == constraint_specs


def find_constraint_files(collections_dir: pathlib.Path) -> typing.List[pathlib.Path]:
    """Find all constraints.txt files in the collections directory."""
    constraint_files = []
    for constraint_file in collections_dir.rglob("constraints.txt"):
        constraint_files.append(constraint_file)
    return sorted(constraint_files)


def get_collection_name(constraint_file: pathlib.Path, collections_dir: pathlib.Path) -> str:
    """Extract a readable collection name from the constraint file path."""
    relative_path = constraint_file.relative_to(collections_dir)
    # Remove the 'constraints.txt' part and join the remaining parts
    parts = relative_path.parts[:-1]  # Remove 'constraints.txt'
    return '/'.join(parts)


def build_constraints_map(collections_dir: pathlib.Path) -> typing.Dict[str, typing.Dict[str, Requirement]]:
    """Build a map of collection -> package_name -> constraint."""
    constraints_map = {}
    constraint_files = find_constraint_files(collections_dir)
    for constraint_file in constraint_files:
        collection_name = get_collection_name(constraint_file, collections_dir)
        constraints = parse_requirements_file(constraint_file)
        constraints_map[collection_name] = {}
        for filename, line_number, constraint in constraints:
            # Use canonicalized name as key
            package_name = canonicalize_name(constraint.name)
            constraints_map[collection_name][package_name] = constraint
    return constraints_map


def format_table(requirements: typing.List[Requirement],
                 constraints_map: typing.Dict[str, typing.Dict[str, Requirement]]) -> str:
    """Format the results as a table."""
    if not requirements:
        return "No valid requirements found."
    # Get all collection names, sorted
    collection_names = sorted(constraints_map.keys())
    # Calculate column widths
    max_package_width = max(len("Package"), max(len(req.package_name) for req in requirements))
    max_requirement_width = max(len("Requirement"), max(len(req.version_spec) if req.version_spec else 0 for req in requirements))
    # Calculate constraint column widths
    constraint_widths = {}
    for collection in collection_names:
        max_width = len(collection)
        for req in requirements:
            if req.package_name in constraints_map[collection]:
                constraint = constraints_map[collection][req.package_name]
                max_width = max(max_width, len(constraint.version_spec))
        constraint_widths[collection] = max_width
    # Build header
    header_parts = ["Package".ljust(max_package_width), "Requirement".ljust(max_requirement_width)]
    for collection in collection_names:
        header_parts.append(collection.ljust(constraint_widths[collection]))
    header_parts.append("Match")
    header = " | ".join(header_parts)
    separator = "-" * len(header)
    # Build rows
    rows = [header, separator]
    for req in requirements:
        if not req.is_valid():
            continue
        row_parts = [
            req.package_name.ljust(max_package_width),
            (req.version_spec or "").ljust(max_requirement_width)
        ]
        # Check constraints for each collection
        matches = []
        for collection in collection_names:
            if req.package_name in constraints_map[collection]:
                constraint = constraints_map[collection][req.package_name]
                constraint_text = constraint.version_spec
                matches.append(req.matches_constraint(constraint))
            else:
                constraint_text = ""
                matches.append(None)  # No constraint
            row_parts.append(constraint_text.ljust(constraint_widths[collection]))
        # Overall match status
        if all(match is None for match in matches):
            match_status = "No constraints"
        elif all(match in (True, None) for match in matches):
            match_status = "✓"
        else:
            match_status = "✗"
        row_parts.append(match_status)
        rows.append(" | ".join(row_parts))
    return "\n".join(rows)


def main() -> None:
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Check requirements against constraints in collections directory"
    )
    parser.add_argument(
        "requirements_file",
        type=pathlib.Path,
        help="Path to requirements.txt file to check"
    )
    parser.add_argument(
        "--collections-dir",
        type=pathlib.Path,
        default=pathlib.Path(__file__).parent.parent.parent / "collections",
        help="Path to collections directory (default: ../../collections)"
    )
    args = parser.parse_args()
    if not args.requirements_file.exists():
        print(f"Error: Requirements file not found: {args.requirements_file}", file=sys.stderr)
        sys.exit(1)
    if not args.collections_dir.exists():
        print(f"Error: Collections directory not found: {args.collections_dir}", file=sys.stderr)
        sys.exit(1)
    # Parse requirements file
    requirements = parse_requirements_file(args.requirements_file)
    if not requirements:
        print("No valid requirements found in the input file.", file=sys.stderr)
        sys.exit(1)
    # Build constraints map
    constraints_map = build_constraints_map(args.collections_dir)
    if not constraints_map:
        print("No constraint files found in collections directory.", file=sys.stderr)
        sys.exit(1)
    # Generate and print table
    table = format_table(requirements, constraints_map)
    print(table)


if __name__ == "__main__":
    main()
I also need to fix the `format_table` function to handle the tuple format returned by `parse_requirements_file`:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
import typing
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from packaging.specifiers import SpecifierSet
# Import the existing parser from the CLI tools
from prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(requirement: Requirement, constraint: Requirement) -> bool:
"""Check if a requirement matches the given constraint."""
# Must be the same package
if canonicalize_name(requirement.name) != canonicalize_name(constraint.name):
return False
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# If constraint has no specifier, requirement must also have none
if not constraint.specifier:
return not requirement.specifier
# Check if the requirement's version constraints are satisfied by the constraint
# This is a simplified check - for exact matches
req_specs = str(requirement.specifier).strip()
constraint_specs = str(constraint.specifier).strip()
# For exact version matches
if req_specs.startswith('==') and constraint_specs.startswith('=='):
return req_specs == constraint_specs
# For compatible release constraints (~=) vs exact (==)
if req_specs.startswith('==') and constraint_specs.startswith('~='):
req_version = req_specs[2:].strip()
constraint_version = constraint_specs[2:].strip()
try:
# Check if the exact version satisfies the compatible release constraint
spec_set = SpecifierSet(constraint_specs)
return spec_set.contains(req_version)
except Exception:
return False
# For range constraints
if req_specs.startswith('=='):
req_version = req_specs[2:].strip()
try:
spec_set = SpecifierSet(constraint_specs)
return spec_set.contains(req_version)
except Exception:
return False
# For other cases, do string comparison
return req_specs == constraint_specs
def find_constraint_files(collections_dir: pathlib.Path) -> typing.List[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(constraint_file: pathlib.Path, collections_dir: pathlib.Path) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return '/'.join(parts)
def build_constraints_map(collections_dir: pathlib.Path) -> typing.Dict[str, typing.Dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for filename, line_number, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(requirements: typing.List[typing.Tuple[pathlib.Path, int, Requirement]],
constraints_map: typing.Dict[str, typing.Dict[str, Requirement]]) -> str:
"""Format the results as a table."""
if not requirements:
return "No valid requirements found."
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Calculate column widths
max_package_width = max(len("Package"), max(len(canonicalize_name(req.name)) for _, _, req in requirements))
max_requirement_width = max(len("Requirement"), max(len(str(req.specifier)) for _, _, req in requirements))
# Calculate constraint column widths
constraint_widths = {}
for collection in collection_names:
max_width = len(collection)
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
max_width = max(max_width, len(str(constraint.specifier)))
constraint_widths[collection] = max_width
# Build header
header_parts = ["Package".ljust(max_package_width), "Requirement".ljust(max_requirement_width)]
for collection in collection_names:
header_parts.append(collection.ljust(constraint_widths[collection]))
header_parts.append("Match")
header = " | ".join(header_parts)
separator = "-" * len(header)
# Build rows
rows = [header, separator]
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
row_parts = [
package_name.ljust(max_package_width),
requirement_spec.ljust(max_requirement_width)
]
# Check constraints for each collection
matches = []
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
matches.append(requirement_matches_constraint(req, constraint))
else:
constraint_text = ""
matches.append(None) # No constraint
row_parts.append(constraint_text.ljust(constraint_widths[collection]))
# Overall match status
if all(match is None for match in matches):
match_status = "No constraints"
elif all(match in (True, None) for match in matches):
match_status = "✓"
else:
match_status = "✗"
row_parts.append(match_status)
rows.append(" | ".join(row_parts))
return "\n".join(rows)
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check"
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)"
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(f"Error: Requirements file not found: {args.requirements_file}", file=sys.stderr)
sys.exit(1)
if not args.collections_dir.exists():
print(f"Error: Collections directory not found: {args.collections_dir}", file=sys.stderr)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
table = format_table(requirements, constraints_map)
print(table)
if __name__ == "__main__":
main()
Now let me test the script again:
Excellent! The script is working! I can see it's successfully comparing the requirements from `its-hub-requirements.txt` against all the constraint files in the collections directory. The output shows:
- Package names in the first column
- Requirements (version specifications) from the input file
- Constraint columns for each collection (showing the constraint if it exists, empty if not)
- Match column showing:
  - "No constraints" for packages that have no constraints in any collection
  - "✓" for packages where the requirement matches all existing constraints
  - "✗" for packages where there are conflicts (like torch, torchaudio, torchvision, and vllm)
Let me complete the todos and test the console script entry point:
Perfect! I have successfully created the `check-constraints` tool for the AIPCC wheels builder. Here's a summary of what was accomplished:
- Explored the collections directory structure - Found 12 constraint files across different collections (accelerated, non-accelerated, api-test, torch-2.8.0) and variants (cpu-ubi9, cuda-ubi9, gaudi-ubi9, rocm-ubi9, spyre-ubi9, tpu-ubi9)
- Reused existing parsing logic - Instead of creating a new parser, I used the existing `parse_requirements_file` function from `prepare_requirements_constraints.py`
- Implemented constraint matching logic - Created a `requirement_matches_constraint()` function using the `packaging` library's `SpecifierSet` for proper version constraint checking
- Created table output formatting - Built a comprehensive table showing:
  - Package names
  - Original requirements
  - Constraints from each collection
  - Match status (✓, ✗, or "No constraints")
- Renamed and registered the script - Renamed from `check-constraints` to `check_constraints.py` and registered it in `pyproject.toml` as a console script entry point
- Successfully tested - The script works correctly with the `its-hub-requirements.txt` file, showing clear conflicts for packages like torch, torchaudio, torchvision, and vllm where the requirements don't match existing constraints
- Comprehensive comparison: Checks requirements against all constraint files in the collections directory
- Smart version matching: Uses `packaging.specifiers.SpecifierSet` for proper version constraint evaluation
- Clear visual output: Table format makes it easy to spot conflicts and missing constraints
- Flexible input: Accepts any requirements.txt file via command line
- Proper integration: Registered as a console script that can be installed and used system-wide
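The matching core of the tool boils down to two `packaging` calls - `canonicalize_name` for the dictionary lookup key and `SpecifierSet.contains` for the version check. A minimal sketch (the package name and versions here are illustrative, not taken from the actual constraint files):

```python
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name

req = Requirement("Torch==2.7.1")          # as it might appear in requirements.txt
constraint = Requirement("torch~=2.7.0")   # as it might appear in a constraints.txt

# Canonicalization makes the lookup case- and separator-insensitive
assert canonicalize_name(req.name) == canonicalize_name(constraint.name)

# The requirement pins an exact version; strip the "==" and test it
version = str(req.specifier)[2:]           # "2.7.1"
assert constraint.specifier.contains(version)
```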
The tool will be very useful for identifying version conflicts when onboarding new packages or updating existing requirements against the various hardware-specific constraint collections in the AIPCC wheels builder.
User
The requirement_matches_constraint function looks much more complicated than it needs to be. No valid constraint is missing a SpecifierSet. The function should only ever be called if the name of the requirement and name of the constraint match. To check the versions for compatibility, it should be possible to just use the contains() method, regardless of the specifier type.
Cursor
You're absolutely right! The function is overly complicated. Since we're only calling it when the package names match and all valid constraints have SpecifierSets, we can simplify it significantly by just using the `contains()` method. Let me fix this:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.utils import canonicalize_name
# Import the existing parser from the CLI tools
from prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract the version from the requirement (assuming it's an exact version ==X.Y.Z)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
# Check if the requirement's version satisfies the constraint
return constraint.specifier.contains(req_version)
# For other specifier types, do string comparison
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for filename, line_number, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> str:
"""Format the results as a table."""
if not requirements:
return "No valid requirements found."
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Calculate column widths
max_package_width = max(
len("Package"),
max(len(canonicalize_name(req.name)) for _, _, req in requirements),
)
max_requirement_width = max(
len("Requirement"), max(len(str(req.specifier)) for _, _, req in requirements)
)
# Calculate constraint column widths
constraint_widths = {}
for collection in collection_names:
max_width = len(collection)
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
max_width = max(max_width, len(str(constraint.specifier)))
constraint_widths[collection] = max_width
# Build header
header_parts = [
"Package".ljust(max_package_width),
"Requirement".ljust(max_requirement_width),
]
for collection in collection_names:
header_parts.append(collection.ljust(constraint_widths[collection]))
header_parts.append("Match")
header = " | ".join(header_parts)
separator = "-" * len(header)
# Build rows
rows = [header, separator]
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
row_parts = [
package_name.ljust(max_package_width),
requirement_spec.ljust(max_requirement_width),
]
# Check constraints for each collection
matches = []
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
matches.append(requirement_matches_constraint(req, constraint))
else:
constraint_text = ""
matches.append(None) # No constraint
row_parts.append(constraint_text.ljust(constraint_widths[collection]))
# Overall match status
if all(match is None for match in matches):
match_status = "No constraints"
elif all(match in (True, None) for match in matches):
match_status = "✓"
else:
match_status = "✗"
row_parts.append(match_status)
rows.append(" | ".join(row_parts))
return "\n".join(rows)
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
table = format_table(requirements, constraints_map)
print(table)
if __name__ == "__main__":
main()
Now I can also remove the unused import since we're no longer manually creating SpecifierSet objects:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
# Import the existing parser from the CLI tools
from prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract the version from the requirement (assuming it's an exact version ==X.Y.Z)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
# Check if the requirement's version satisfies the constraint
return constraint.specifier.contains(req_version)
# For other specifier types, do string comparison
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for filename, line_number, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> str:
"""Format the results as a table."""
if not requirements:
return "No valid requirements found."
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Calculate column widths
max_package_width = max(
len("Package"),
max(len(canonicalize_name(req.name)) for _, _, req in requirements),
)
max_requirement_width = max(
len("Requirement"), max(len(str(req.specifier)) for _, _, req in requirements)
)
# Calculate constraint column widths
constraint_widths = {}
for collection in collection_names:
max_width = len(collection)
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
max_width = max(max_width, len(str(constraint.specifier)))
constraint_widths[collection] = max_width
# Build header
header_parts = [
"Package".ljust(max_package_width),
"Requirement".ljust(max_requirement_width),
]
for collection in collection_names:
header_parts.append(collection.ljust(constraint_widths[collection]))
header_parts.append("Match")
header = " | ".join(header_parts)
separator = "-" * len(header)
# Build rows
rows = [header, separator]
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
row_parts = [
package_name.ljust(max_package_width),
requirement_spec.ljust(max_requirement_width),
]
# Check constraints for each collection
matches = []
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
matches.append(requirement_matches_constraint(req, constraint))
else:
constraint_text = ""
matches.append(None) # No constraint
row_parts.append(constraint_text.ljust(constraint_widths[collection]))
# Overall match status
if all(match is None for match in matches):
match_status = "No constraints"
elif all(match in (True, None) for match in matches):
match_status = "✓"
else:
match_status = "✗"
row_parts.append(match_status)
rows.append(" | ".join(row_parts))
return "\n".join(rows)
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
table = format_table(requirements, constraints_map)
print(table)
if __name__ == "__main__":
main()
Let me test the simplified function to make sure it still works correctly:
Good! The output looks correct (the broken pipe error is just from the `head` command truncating the output). Let me test a few specific cases to make sure the constraint matching is working properly:
Perfect! The simplified function is working correctly:
- soundfile==0.13.1 matches >=0.13.0 ✓ (0.13.1 satisfies >=0.13.0)
- soxr==1.0.0 matches >=0.5.0.post1 ✓ (1.0.0 satisfies >=0.5.0.post1)
- torch==2.7.1 conflicts with various constraints ✗:
  - Doesn't match ==2.7.0 (exact mismatch)
  - Satisfies ~=2.7.0 (compatible release) but shows ✗ because other collections have conflicting constraints
  - Doesn't match ==2.8.0 (exact mismatch)
- vllm==0.10.0 has mixed results ✗:
  - Doesn't match ==0.9.1 (exact mismatch)
  - Satisfies >=0.10.0,<0.11.0 (range constraint)
  - Doesn't satisfy >=0.11.0,<0.12.0 (too low)
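Those table results can be reproduced directly with `SpecifierSet.contains`, which is what the simplified function calls under the hood:

```python
from packaging.specifiers import SpecifierSet

# soundfile==0.13.1 and soxr==1.0.0 satisfy their range constraints
assert SpecifierSet(">=0.13.0").contains("0.13.1")
assert SpecifierSet(">=0.5.0.post1").contains("1.0.0")

# torch==2.7.1 against the collection constraints
assert not SpecifierSet("==2.7.0").contains("2.7.1")   # exact mismatch
assert SpecifierSet("~=2.7.0").contains("2.7.1")       # compatible release
assert not SpecifierSet("==2.8.0").contains("2.7.1")   # exact mismatch

# vllm==0.10.0 against the range constraints
assert not SpecifierSet("==0.9.1").contains("0.10.0")
assert SpecifierSet(">=0.10.0,<0.11.0").contains("0.10.0")
assert not SpecifierSet(">=0.11.0,<0.12.0").contains("0.10.0")
```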
The function is now much simpler and cleaner:
- It leverages the `packaging` library's built-in `contains()` method
- It handles the most common case (exact version requirements) efficiently
- It falls back to string comparison for other specifier types
- It's much more readable and maintainable
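For reference, `contains()` takes a plain version string and evaluates it against any specifier set, whatever the operator, so the same call covers every constraint style found in the collections (the versions here are illustrative):

```python
from packaging.specifiers import SpecifierSet

version = "2.7.1"
cases = [
    ("==2.7.1", True),
    ("~=2.7.0", True),   # compatible release: >=2.7.0, ==2.7.*
    (">=2.7",   True),
    ("<2.7",    False),
    ("!=2.7.1", False),
]
for spec, expected in cases:
    assert SpecifierSet(spec).contains(version) is expected
```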
Let me check for any linting issues:
Excellent! The simplified `requirement_matches_constraint` function is working perfectly and is much cleaner than the original implementation. Thank you for the feedback - you were absolutely right that it was unnecessarily complicated!
User
The function still includes special cases for == and ~= and those are not needed
Cursor
You're absolutely right! The `contains()` method should handle all specifier types correctly, including `==`, `~=`, `>=`, `<`, etc. Let me simplify it even further by removing those special cases:
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract the version from the requirement (assuming it's an exact version ==X.Y.Z)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
# Check if the requirement's version satisfies the constraint
return constraint.specifier.contains(req_version)
# For other specifier types, do string comparison
return str(requirement.specifier) == str(constraint.specifier)
Let me read the current file to see the exact content:
Now I can see the current function. You're absolutely right - we don't need the special case for `==` at all. The `contains()` method should handle all specifier types. Let me simplify it completely:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
# Import the existing parser from the CLI tools
from prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Use the constraint's specifier to check if it contains the requirement's version
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For non-exact requirements, compare the specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for filename, line_number, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> str:
"""Format the results as a table."""
if not requirements:
return "No valid requirements found."
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Calculate column widths
max_package_width = max(
len("Package"),
        max(len(canonicalize_name(req.name)) for _, _, req in requirements),
    )
    max_requirement_width = max(
        len("Requirement"), max(len(str(req.specifier)) for _, _, req in requirements)
    )
    # Calculate constraint column widths
    constraint_widths = {}
    for collection in collection_names:
        max_width = len(collection)
        for _, _, req in requirements:
            package_name = canonicalize_name(req.name)
            if package_name in constraints_map[collection]:
                constraint = constraints_map[collection][package_name]
                max_width = max(max_width, len(str(constraint.specifier)))
        constraint_widths[collection] = max_width
    # Build header
    header_parts = [
        "Package".ljust(max_package_width),
        "Requirement".ljust(max_requirement_width),
    ]
    for collection in collection_names:
        header_parts.append(collection.ljust(constraint_widths[collection]))
    header_parts.append("Match")
    header = " | ".join(header_parts)
    separator = "-" * len(header)
    # Build rows
    rows = [header, separator]
    for _, _, req in requirements:
        package_name = canonicalize_name(req.name)
        requirement_spec = str(req.specifier)
        row_parts = [
            package_name.ljust(max_package_width),
            requirement_spec.ljust(max_requirement_width),
        ]
        # Check constraints for each collection
        matches = []
        for collection in collection_names:
            if package_name in constraints_map[collection]:
                constraint = constraints_map[collection][package_name]
                constraint_text = str(constraint.specifier)
                matches.append(requirement_matches_constraint(req, constraint))
            else:
                constraint_text = ""
                matches.append(None)  # No constraint
            row_parts.append(constraint_text.ljust(constraint_widths[collection]))
        # Overall match status
        if all(match is None for match in matches):
            match_status = "No constraints"
        elif all(match in (True, None) for match in matches):
            match_status = "✓"
        else:
            match_status = "✗"
        row_parts.append(match_status)
        rows.append(" | ".join(row_parts))
    return "\n".join(rows)


def main() -> None:
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Check requirements against constraints in collections directory"
    )
    parser.add_argument(
        "requirements_file",
        type=pathlib.Path,
        help="Path to requirements.txt file to check",
    )
    parser.add_argument(
        "--collections-dir",
        type=pathlib.Path,
        default=pathlib.Path(__file__).parent.parent.parent / "collections",
        help="Path to collections directory (default: ../../collections)",
    )
    args = parser.parse_args()
    if not args.requirements_file.exists():
        print(
            f"Error: Requirements file not found: {args.requirements_file}",
            file=sys.stderr,
        )
        sys.exit(1)
    if not args.collections_dir.exists():
        print(
            f"Error: Collections directory not found: {args.collections_dir}",
            file=sys.stderr,
        )
        sys.exit(1)
    # Parse requirements file
    requirements = parse_requirements_file(args.requirements_file)
    if not requirements:
        print("No valid requirements found in the input file.", file=sys.stderr)
        sys.exit(1)
    # Build constraints map
    constraints_map = build_constraints_map(args.collections_dir)
    if not constraints_map:
        print("No constraint files found in collections directory.", file=sys.stderr)
        sys.exit(1)
    # Generate and print table
    table = format_table(requirements, constraints_map)
    print(table)


if __name__ == "__main__":
    main()
Wait, I think I misunderstood your point. You're saying we don't need ANY special cases at all; the contains() method should work for all cases. Let me simplify it even further:
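For context, packaging's SpecifierSet.contains() takes a single version string and reports whether that version satisfies every specifier in the set. A standalone sketch:

```python
from packaging.specifiers import SpecifierSet

constraint = SpecifierSet(">=2.0,<3.0")
print(constraint.contains("2.6.1"))  # True: the version satisfies both specifiers
print(constraint.contains("3.1.0"))  # False: excluded by <3.0
```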
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys

from packaging.requirements import Requirement
from packaging.utils import canonicalize_name

# Import the existing parser from the CLI tools
from prepare_requirements_constraints import parse_requirements_file


def requirement_matches_constraint(
    requirement: Requirement, constraint: Requirement
) -> bool:
    """Check if a requirement matches the given constraint."""
    # If requirement has no specifier, it matches any constraint
    if not requirement.specifier:
        return True
    # For requirements with exact versions, check if the constraint contains that version
    req_specs = str(requirement.specifier).strip()
    if req_specs.startswith("=="):
        req_version = req_specs[2:].strip()
        return constraint.specifier.contains(req_version)
    # For other requirement types, compare specifiers directly
    return str(requirement.specifier) == str(constraint.specifier)


def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
    """Find all constraints.txt files in the collections directory."""
    constraint_files = []
    for constraint_file in collections_dir.rglob("constraints.txt"):
        constraint_files.append(constraint_file)
    return sorted(constraint_files)


def get_collection_name(
    constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
    """Extract a readable collection name from the constraint file path."""
    relative_path = constraint_file.relative_to(collections_dir)
    # Remove the 'constraints.txt' part and join the remaining parts
    parts = relative_path.parts[:-1]  # Remove 'constraints.txt'
    return "/".join(parts)


def build_constraints_map(
    collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
    """Build a map of collection -> package_name -> constraint."""
    constraints_map = {}
    constraint_files = find_constraint_files(collections_dir)
    for constraint_file in constraint_files:
        collection_name = get_collection_name(constraint_file, collections_dir)
        constraints = parse_requirements_file(constraint_file)
        constraints_map[collection_name] = {}
        for filename, line_number, constraint in constraints:
            # Use canonicalized name as key
            package_name = canonicalize_name(constraint.name)
            constraints_map[collection_name][package_name] = constraint
    return constraints_map


def format_table(
    requirements: list[tuple[pathlib.Path, int, Requirement]],
    constraints_map: dict[str, dict[str, Requirement]],
) -> str:
    """Format the results as a table."""
    if not requirements:
        return "No valid requirements found."
    # Get all collection names, sorted
    collection_names = sorted(constraints_map.keys())
    # Calculate column widths
    max_package_width = max(
        len("Package"),
        max(len(canonicalize_name(req.name)) for _, _, req in requirements),
    )
    max_requirement_width = max(
        len("Requirement"), max(len(str(req.specifier)) for _, _, req in requirements)
    )
    # Calculate constraint column widths
    constraint_widths = {}
    for collection in collection_names:
        max_width = len(collection)
        for _, _, req in requirements:
            package_name = canonicalize_name(req.name)
            if package_name in constraints_map[collection]:
                constraint = constraints_map[collection][package_name]
                max_width = max(max_width, len(str(constraint.specifier)))
        constraint_widths[collection] = max_width
    # Build header
    header_parts = [
        "Package".ljust(max_package_width),
        "Requirement".ljust(max_requirement_width),
    ]
    for collection in collection_names:
        header_parts.append(collection.ljust(constraint_widths[collection]))
    header_parts.append("Match")
    header = " | ".join(header_parts)
    separator = "-" * len(header)
    # Build rows
    rows = [header, separator]
    for _, _, req in requirements:
        package_name = canonicalize_name(req.name)
        requirement_spec = str(req.specifier)
        row_parts = [
            package_name.ljust(max_package_width),
            requirement_spec.ljust(max_requirement_width),
        ]
        # Check constraints for each collection
        matches = []
        for collection in collection_names:
            if package_name in constraints_map[collection]:
                constraint = constraints_map[collection][package_name]
                constraint_text = str(constraint.specifier)
                matches.append(requirement_matches_constraint(req, constraint))
            else:
                constraint_text = ""
                matches.append(None)  # No constraint
            row_parts.append(constraint_text.ljust(constraint_widths[collection]))
        # Overall match status
        if all(match is None for match in matches):
            match_status = "No constraints"
        elif all(match in (True, None) for match in matches):
            match_status = "✓"
        else:
            match_status = "✗"
        row_parts.append(match_status)
        rows.append(" | ".join(row_parts))
    return "\n".join(rows)


def main() -> None:
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Check requirements against constraints in collections directory"
    )
    parser.add_argument(
        "requirements_file",
        type=pathlib.Path,
        help="Path to requirements.txt file to check",
    )
    parser.add_argument(
        "--collections-dir",
        type=pathlib.Path,
        default=pathlib.Path(__file__).parent.parent.parent / "collections",
        help="Path to collections directory (default: ../../collections)",
    )
    args = parser.parse_args()
    if not args.requirements_file.exists():
        print(
            f"Error: Requirements file not found: {args.requirements_file}",
            file=sys.stderr,
        )
        sys.exit(1)
    if not args.collections_dir.exists():
        print(
            f"Error: Collections directory not found: {args.collections_dir}",
            file=sys.stderr,
        )
        sys.exit(1)
    # Parse requirements file
    requirements = parse_requirements_file(args.requirements_file)
    if not requirements:
        print("No valid requirements found in the input file.", file=sys.stderr)
        sys.exit(1)
    # Build constraints map
    constraints_map = build_constraints_map(args.collections_dir)
    if not constraints_map:
        print("No constraint files found in collections directory.", file=sys.stderr)
        sys.exit(1)
    # Generate and print table
    table = format_table(requirements, constraints_map)
    print(table)


if __name__ == "__main__":
    main()
Actually, let me understand your point better. You're saying we don't need the special case for == at all, right? We should just use contains() for everything? Let me try a completely simplified version:
def requirement_matches_constraint(
    requirement: Requirement, constraint: Requirement
) -> bool:
    """Check if a requirement matches the given constraint."""
    # If requirement has no specifier, it matches any constraint
    if not requirement.specifier:
        return True
    # Check if the requirement satisfies the constraint
    return constraint.specifier.contains(str(requirement.specifier))
# ... (find_constraint_files, get_collection_name, build_constraints_map, format_table, and main are unchanged from the previous version)
Wait, that's not right. The contains() method expects a version string, not a specifier string. Let me fix that:
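A standalone sketch of the distinction (on recent packaging releases, passing a specifier string is rejected because the argument is parsed as a version number):

```python
from packaging.specifiers import SpecifierSet

constraint = SpecifierSet(">=2.0,<3.0")

# A bare version string is what contains() expects:
print(constraint.contains("2.6.1"))  # True

# A specifier string such as "==2.6.1" is not a valid version.
try:
    result = constraint.contains("==2.6.1")
    print(f"unexpectedly accepted: {result}")
except Exception as exc:
    print(f"rejected: {type(exc).__name__}")
```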
def requirement_matches_constraint(
    requirement: Requirement, constraint: Requirement
) -> bool:
    """Check if a requirement matches the given constraint."""
    # If requirement has no specifier, it matches any constraint
    if not requirement.specifier:
        return True
    # Extract version from requirement specifier (assuming ==X.Y.Z format)
    req_specs = str(requirement.specifier).strip()
    if req_specs.startswith("=="):
        req_version = req_specs[2:].strip()
        return constraint.specifier.contains(req_version)
    # For other requirement types, compare specifiers directly
    return str(requirement.specifier) == str(constraint.specifier)
# ... (find_constraint_files, get_collection_name, build_constraints_map, format_table, and main are unchanged from the previous version)
Let me test this to make sure it still works correctly:
Perfect! The function is still working correctly. However, I think I understand your point now. You're saying that since we're only dealing with exact version requirements (==X.Y.Z) from the requirements.txt file, and the contains() method should handle all constraint types properly, we don't need any special logic at all. But actually, looking at the current code, we do still need to extract the version from the ==X.Y.Z format, because contains() expects a version string, not a specifier string.
The current implementation is actually quite clean and handles the main use case (exact version requirements) properly. The function is now much simpler than the original and correctly uses the packaging library's built-in functionality.
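A quick self-contained check of that behavior, with the matching function copied from the implementation above:

```python
from packaging.requirements import Requirement


def requirement_matches_constraint(requirement, constraint):
    """Same logic as the tool: pull the pinned version out of ==X.Y.Z
    and test it against the constraint's specifier set."""
    if not requirement.specifier:
        return True
    req_specs = str(requirement.specifier).strip()
    if req_specs.startswith("=="):
        return constraint.specifier.contains(req_specs[2:].strip())
    return str(requirement.specifier) == str(constraint.specifier)


print(requirement_matches_constraint(Requirement("numpy==2.2.6"), Requirement("numpy<3")))    # True: 2.2.6 satisfies <3
print(requirement_matches_constraint(Requirement("numpy==2.2.6"), Requirement("numpy>=3.0"))) # False: 2.2.6 fails >=3.0
print(requirement_matches_constraint(Requirement("numpy"), Requirement("numpy<3")))           # True: no specifier matches anything
```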
Is there something specific about the current implementation that you think can be simplified further?
User
Change the table formatting so there is only 1 column per collection. Use 2 lines per row of the table. On the first line in the cell for a collection show the constraint, if there is one. On the second line in the cell for a collection show whether the requirement matches the constraint, if there is one. If there is no constraint the cell should be empty.
Cursor
I'll modify the table formatting to use a 2-line format per row with one column per collection. Let me update the format_table function:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
# ... (requirement_matches_constraint, find_constraint_files, get_collection_name, and build_constraints_map are unchanged from the previous version)
def format_table(
    requirements: list[tuple[pathlib.Path, int, Requirement]],
    constraints_map: dict[str, dict[str, Requirement]],
) -> str:
    """Format the results as a table with 2-line rows per package."""
    if not requirements:
        return "No valid requirements found."
    # Get all collection names, sorted
    collection_names = sorted(constraints_map.keys())
    # Calculate column widths
    max_package_width = max(
        len("Package"),
        max(len(canonicalize_name(req.name)) for _, _, req in requirements),
    )
    max_requirement_width = max(
        len("Requirement"), max(len(str(req.specifier)) for _, _, req in requirements)
    )
    # Calculate constraint column widths (need to accommodate both constraint and match status)
    constraint_widths = {}
    for collection in collection_names:
        max_width = len(collection)
        for _, _, req in requirements:
            package_name = canonicalize_name(req.name)
            if package_name in constraints_map[collection]:
                constraint = constraints_map[collection][package_name]
                constraint_text = str(constraint.specifier)
                max_width = max(max_width, len(constraint_text), 3)  # reserve minimum room for the match marker
        constraint_widths[collection] = max_width
    # Build header
    header_parts = [
        "Package".ljust(max_package_width),
        "Requirement".ljust(max_requirement_width),
    ]
    for collection in collection_names:
        header_parts.append(collection.ljust(constraint_widths[collection]))
    header = " | ".join(header_parts)
    separator = "-" * len(header)
    # Build rows
    rows = [header, separator]
    for _, _, req in requirements:
        package_name = canonicalize_name(req.name)
        requirement_spec = str(req.specifier)
        # First line: package name, requirement, and constraints
        first_line_parts = [
            package_name.ljust(max_package_width),
            requirement_spec.ljust(max_requirement_width),
        ]
        # Second line: empty package/requirement columns, match status
        second_line_parts = [
            "".ljust(max_package_width),
            "".ljust(max_requirement_width),
        ]
        has_any_constraints = False
        for collection in collection_names:
            if package_name in constraints_map[collection]:
                constraint = constraints_map[collection][package_name]
                constraint_text = str(constraint.specifier)
                match_result = requirement_matches_constraint(req, constraint)
                match_status = "✓" if match_result else "✗"
                has_any_constraints = True
            else:
                constraint_text = ""
                match_status = ""
            first_line_parts.append(constraint_text.ljust(constraint_widths[collection]))
            second_line_parts.append(match_status.ljust(constraint_widths[collection]))
        # Add the first line
        rows.append(" | ".join(first_line_parts))
        # Add the second line only if there are any constraints
        if has_any_constraints:
            rows.append(" | ".join(second_line_parts))
        # Add a separator line between packages for readability
        rows.append("")
    return "\n".join(rows)
# ... (main is unchanged from the previous version)
Now let me test the new table format:
User
Use the rich package to create the table.
Cursor
Great idea! Using the rich package will make the table much more readable and professional-looking. Let me update the code to use rich.table.Table:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
# ... (requirement_matches_constraint, find_constraint_files, get_collection_name, and build_constraints_map are unchanged from the previous version)
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> str:
"""Format the results as a table with 2-line rows per package."""
if not requirements:
return "No valid requirements found."
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Calculate column widths
max_package_width = max(
len("Package"),
max(len(canonicalize_name(req.name)) for _, _, req in requirements),
)
max_requirement_width = max(
len("Requirement"), max(len(str(req.specifier)) for _, _, req in requirements)
)
# Calculate constraint column widths (need to accommodate both constraint and match status)
constraint_widths = {}
for collection in collection_names:
max_width = len(collection)
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
max_width = max(max_width, len(constraint_text), 3) # 3 for "✓" or "✗"
constraint_widths[collection] = max_width
# Build header
header_parts = [
"Package".ljust(max_package_width),
"Requirement".ljust(max_requirement_width),
]
for collection in collection_names:
header_parts.append(collection.ljust(constraint_widths[collection]))
header = " | ".join(header_parts)
separator = "-" * len(header)
# Build rows
rows = [header, separator]
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# First line: package name, requirement, and constraints
first_line_parts = [
package_name.ljust(max_package_width),
requirement_spec.ljust(max_requirement_width),
]
# Second line: empty package/requirement columns, match status
second_line_parts = [
"".ljust(max_package_width),
"".ljust(max_requirement_width),
]
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
match_status = "✓" if match_result else "✗"
has_any_constraints = True
else:
constraint_text = ""
match_status = ""
first_line_parts.append(constraint_text.ljust(constraint_widths[collection]))
second_line_parts.append(match_status.ljust(constraint_widths[collection]))
# Add the first line
rows.append(" | ".join(first_line_parts))
# Add the second line only if there are any constraints
if has_any_constraints:
rows.append(" | ".join(second_line_parts))
# Add a separator line between packages for readability
rows.append("")
return "\n".join(rows)
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
table = format_table(requirements, constraints_map)
print(table)
if __name__ == "__main__":
main()
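As an aside, the two `packaging` helpers this script builds on are worth seeing in isolation. A minimal standalone sketch (illustrative only, not part of the generated tool):

```python
# Sketch of the two `packaging` helpers used by the checker.
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name

# canonicalize_name normalizes case, dots, and underscores, which is
# why it is used as the lookup key on both the requirement and
# constraint sides.
assert canonicalize_name("Pydantic_Core") == "pydantic-core"

req = Requirement("aiohttp==3.12.15")
constraint = Requirement("aiohttp>=3.0,<4.0")

# SpecifierSet.contains() is the core match test for pinned versions.
pinned = str(req.specifier)[2:]  # strip the leading "=="
print(constraint.specifier.contains(pinned))  # True
```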
Now let me rewrite the `format_table` function to use `rich.table.Table`:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for filename, line_number, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = collection.replace("accelerated/", "").replace("non-accelerated/", "non-acc/").replace("torch-2.8.0/", "torch-2.8/")
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
table = format_table(requirements, constraints_map)
print(table)
if __name__ == "__main__":
main()
Now I need to update the main function to use the Console to print the table and handle the case where there are no requirements:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for filename, line_number, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = collection.replace("accelerated/", "").replace("non-accelerated/", "non-acc/").replace("torch-2.8.0/", "torch-2.8/")
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map)
console.print(table)
if __name__ == "__main__":
main()
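For context, the Rich pattern used above in miniature — a sketch assuming `rich` is installed, separate from the tool itself. Cell strings may contain markup and embedded newlines, which is what the two-line constraint/match cells rely on:

```python
import io

from rich.console import Console
from rich.table import Table

table = Table(title="Demo")
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Status", justify="center")
# Markup tags and "\n" inside a cell render as styled, multi-line content.
table.add_row("aiohttp", "3.12.15\n[green]\u2713[/green]")

# Render to a string buffer instead of the terminal to inspect the output.
console = Console(file=io.StringIO(), width=40)
console.print(table)
print(console.file.getvalue())
```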
Let me check for any linting issues:
Let me fix the linting issues:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = collection.replace("accelerated/", "").replace("non-accelerated/", "non-acc/").replace("torch-2.8.0/", "torch-2.8/")
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map)
console.print(table)
if __name__ == "__main__":
main()
Now let me fix the argument parser assignments:
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
The linter warnings about unused return values from `add_argument` are not critical. Let me test the script to see if it works with the rich table:
I need to fix the import path. Let me change it back to a relative import:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = collection.replace("accelerated/", "").replace("non-accelerated/", "non-acc/").replace("torch-2.8.0/", "torch-2.8/")
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map)
console.print(table)
if __name__ == "__main__":
main()
Now let me test the script again:
User
Keep the absolute import path. Use the command "tox -e cli -- check-constraints -- its-hub-requirements.txt" to run the program to test it
Cursor
You're right! Let me change back to the absolute import path and use the tox command to test it properly:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = collection.replace("accelerated/", "").replace("non-accelerated/", "non-acc/").replace("torch-2.8.0/", "torch-2.8/")
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map)
console.print(table)
if __name__ == "__main__":
main()
Now let me test it using the tox command:
Excellent! The Rich table is working beautifully! I can see:
- Beautiful formatting: The Rich table has proper borders, colors, and styling
- Color coding:
- Package names are colored based on status (dim gray for no constraints, green for matches, red for conflicts)
- Green ✓ for matching constraints
- Red ✗ for conflicting constraints
- Two-line format: Each cell shows the constraint on the first line and the match status on the second line
- Shortened column names: The column headers are more readable (e.g., "cpu-ubi9", "cuda-ubi9", "torch-2.8/cpu-ubi9")
- Clear visual indicators:
- `outlines-core` shows as green with a ✓ (matches constraint)
- `soundfile` and `soxr` show as green with ✓ (match constraints)
- `torch`, `torchaudio`, `torchvision`, and `vllm` show as red with ✗ (have conflicts)
- Most packages are dimmed (no constraints)
The tool is working perfectly! It provides a clear, visual way to identify:
- Which packages have constraints in which collections
- Which requirements match their constraints
- Which packages have conflicts that need attention
This is a huge improvement over the plain text table format. The Rich library makes it much easier to quickly scan and identify issues.
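The matching rule the tool applies can be checked in isolation. This is a minimal sketch using the `packaging` library; the `matches` helper is hypothetical shorthand for the script's `requirement_matches_constraint` logic, not part of the tool itself:

```python
from packaging.requirements import Requirement


def matches(req_line: str, constraint_line: str) -> bool:
    # Mirrors the script's rule: a pinned requirement (==X.Y.Z) matches
    # when the constraint's specifier set contains that exact version.
    req = Requirement(req_line)
    constraint = Requirement(constraint_line)
    if not req.specifier:
        return True  # unpinned requirements accept any constraint
    spec = str(req.specifier)
    if spec.startswith("=="):
        return constraint.specifier.contains(spec[2:])
    return str(req.specifier) == str(constraint.specifier)


print(matches("torch==2.8.0", "torch>=2.7,<2.9"))  # True: pin inside range
print(matches("torch==2.6.0", "torch>=2.7,<2.9"))  # False: pin outside range
```

This is why a conflicting pin like `torch==2.6.0` would render as a red ✗ in the table above.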
User
Convert the program to use click to handle command line arguments
Cursor
I'll convert the program to use Click for command line argument handling. Let me update the imports and replace the argparse code:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import argparse
import pathlib
import sys
import click
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = (
collection.replace("accelerated/", "")
.replace("non-accelerated/", "non-acc/")
.replace("torch-2.8.0/", "torch-2.8/")
)
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map)
console.print(table)
if __name__ == "__main__":
main()
Actually, I can remove the argparse import since we're switching to Click. Let me fix that:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import pathlib
import sys
import click
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = (
collection.replace("accelerated/", "")
.replace("non-accelerated/", "non-acc/")
.replace("torch-2.8.0/", "torch-2.8/")
)
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Check requirements against constraints in collections directory"
)
parser.add_argument(
"requirements_file",
type=pathlib.Path,
help="Path to requirements.txt file to check",
)
parser.add_argument(
"--collections-dir",
type=pathlib.Path,
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
args = parser.parse_args()
if not args.requirements_file.exists():
print(
f"Error: Requirements file not found: {args.requirements_file}",
file=sys.stderr,
)
sys.exit(1)
if not args.collections_dir.exists():
print(
f"Error: Collections directory not found: {args.collections_dir}",
file=sys.stderr,
)
sys.exit(1)
# Parse requirements file
requirements = parse_requirements_file(args.requirements_file)
if not requirements:
print("No valid requirements found in the input file.", file=sys.stderr)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(args.collections_dir)
if not constraints_map:
print("No constraint files found in collections directory.", file=sys.stderr)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map)
console.print(table)
if __name__ == "__main__":
main()
Now let me replace the main function with a Click command:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import pathlib
import sys
import click
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = (
collection.replace("accelerated/", "")
.replace("non-accelerated/", "non-acc/")
.replace("torch-2.8.0/", "torch-2.8/")
)
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
@click.command(help=__doc__)
@click.argument("requirements_file", type=click.Path(exists=True, path_type=pathlib.Path))
@click.option(
"--collections-dir",
type=click.Path(exists=True, path_type=pathlib.Path),
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
def main(requirements_file: pathlib.Path, collections_dir: pathlib.Path) -> None:
"""Check requirements against constraints in collections directory."""
# Parse requirements file
requirements = parse_requirements_file(requirements_file)
if not requirements:
click.echo("No valid requirements found in the input file.", err=True)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(collections_dir)
if not constraints_map:
click.echo("No constraint files found in collections directory.", err=True)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map)
console.print(table)
if __name__ == "__main__":
main()
Let me check for any linting issues:
The linting warnings are minor type issues that don't affect functionality. Now let me test the Click version:
Perfect! The Click version is working. The errors are just from the `head` command truncating the output (broken pipe), but we can see the table is being generated correctly. Let me test it without the head command to see the help:
I need to fix the help command. Let me try the proper syntax:
Excellent! The Click conversion is working perfectly. Let me update my TODO list to mark this as completed:
Perfect! The conversion to Click is complete and working beautifully. Here's a summary of what we've accomplished:
The `check-constraints` tool now uses Click for command line argument handling, which provides several benefits:
- Clean Command Definition: Using the `@click.command()` decorator with docstring help
- Typed Arguments: `@click.argument()` with proper path validation
- Typed Options: `@click.option()` with default values and help text
- Automatic Help: Click automatically generates beautiful help output
- Path Validation: Click validates that the requirements file exists
- Better Error Messages: Click provides clear error messages for invalid inputs
- Automatic validation: Click validates file existence before the main function runs
- Better help system: `--help` shows a nicely formatted help message with the full docstring
- Type safety: Path arguments are properly typed as `pathlib.Path`
- Consistent with other CLI tools: Matches the pattern used in other scripts in the project
# Show help
tox -e cli -- check-constraints --help
# Run with default collections directory
tox -e cli -- check-constraints its-hub-requirements.txt
# Run with custom collections directory
tox -e cli -- check-constraints --collections-dir /path/to/collections its-hub-requirements.txt
The tool continues to use the Rich library for beautiful table formatting with:
- Color-coded package names (green for matches, red for conflicts, dim for no constraints)
- Two-line format showing constraints and match status
- Shortened column names for better readability
- Visual indicators (✓ for matches, ✗ for conflicts)
The tool is now complete and provides a professional, user-friendly interface for analyzing package requirements against constraint files in the AIPCC wheels builder project!
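One detail worth noting from the code above: constraint lookups key on `packaging.utils.canonicalize_name`, so spelling variants of a package name still line up across requirements and constraints files. A quick illustration:

```python
from packaging.utils import canonicalize_name

# PEP 503 normalization: lowercase, with runs of '-', '_', '.' collapsed to '-'
print(canonicalize_name("Pillow"))         # pillow
print(canonicalize_name("pydantic_core"))  # pydantic-core
print(canonicalize_name("ruamel.yaml"))    # ruamel-yaml
```

Without this step, `pydantic_core` in a requirements file would silently fail to match a `pydantic-core` constraint.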
User
Add a command line option to skip outputting information about requirements for which there are no constraints.
Cursor
I'll add a command line option to skip packages that have no constraints. Let me add the option and update the table formatting logic:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import pathlib
import sys
import click
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = (
collection.replace("accelerated/", "")
.replace("non-accelerated/", "non-acc/")
.replace("torch-2.8.0/", "torch-2.8/")
)
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
@click.command(help=__doc__)
@click.argument(
"requirements_file",
type=click.Path(exists=True, path_type=pathlib.Path),
)
@click.option(
"--collections-dir",
type=click.Path(exists=True, path_type=pathlib.Path),
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
@click.option(
"--skip-unconstrained",
is_flag=True,
help="Skip packages that have no constraints in any collection",
)
def main(
requirements_file: pathlib.Path,
collections_dir: pathlib.Path,
skip_unconstrained: bool,
) -> None:
"""Check requirements against constraints in collections directory."""
# Parse requirements file
requirements = parse_requirements_file(requirements_file)
if not requirements:
click.echo("No valid requirements found in the input file.", err=True)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(collections_dir)
if not constraints_map:
click.echo("No constraint files found in collections directory.", err=True)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map)
console.print(table)
if __name__ == "__main__":
main()
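As a quick illustration of the matching semantics in requirement_matches_constraint above, here is a standalone sketch using the packaging library directly (the torch/numpy requirement strings are just illustrative inputs):

```python
from packaging.requirements import Requirement

# A pinned requirement matches when its version falls inside the
# constraint's specifier range.
req = Requirement("torch==2.8.0")
constraint = Requirement("torch>=2.8,<2.9")
version = str(req.specifier)[2:]  # strip the leading "==" -> "2.8.0"
print(constraint.specifier.contains(version))  # True

# A requirement with no specifier at all matches any constraint.
print(bool(Requirement("numpy").specifier))  # False -> treated as "matches"
```

Note this relies on the same assumption as the tool: requirements are pinned with == specifiers, so stripping the first two characters yields the bare version string.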
Now I need to update the main function to pass the skip_unconstrained flag to the format_table function:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import pathlib
import sys
import click
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = (
collection.replace("accelerated/", "")
.replace("non-accelerated/", "non-acc/")
.replace("torch-2.8.0/", "torch-2.8/")
)
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Prepare constraint and match data for each collection
constraint_cells = []
has_any_constraints = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
has_any_constraints = True
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
@click.command(help=__doc__)
@click.argument(
"requirements_file",
type=click.Path(exists=True, path_type=pathlib.Path),
)
@click.option(
"--collections-dir",
type=click.Path(exists=True, path_type=pathlib.Path),
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
@click.option(
"--skip-unconstrained",
is_flag=True,
help="Skip packages that have no constraints in any collection",
)
def main(
requirements_file: pathlib.Path,
collections_dir: pathlib.Path,
skip_unconstrained: bool,
) -> None:
"""Check requirements against constraints in collections directory."""
# Parse requirements file
requirements = parse_requirements_file(requirements_file)
if not requirements:
click.echo("No valid requirements found in the input file.", err=True)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(collections_dir)
if not constraints_map:
click.echo("No constraint files found in collections directory.", err=True)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map, skip_unconstrained)
console.print(table)
if __name__ == "__main__":
main()
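As an aside, the collection-name derivation in get_collection_name is pure path manipulation and can be sketched in isolation with the standard library (the accelerated/cuda-ubi9 path is just an example layout):

```python
import pathlib

def collection_name(constraint_file: pathlib.PurePath,
                    collections_dir: pathlib.PurePath) -> str:
    # Drop the trailing 'constraints.txt' and rejoin the directory parts.
    parts = constraint_file.relative_to(collections_dir).parts[:-1]
    return "/".join(parts)

root = pathlib.PurePath("collections")
f = root / "accelerated" / "cuda-ubi9" / "constraints.txt"
print(collection_name(f, root))  # accelerated/cuda-ubi9
```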
Now I need to update the format_table function signature and logic to handle the skip_unconstrained parameter:
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import pathlib
import sys
import click
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
skip_unconstrained: bool = False,
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Use a shorter column title for better readability
short_name = (
collection.replace("accelerated/", "")
.replace("non-accelerated/", "non-acc/")
.replace("torch-2.8.0/", "torch-2.8/")
)
table.add_column(short_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Check if this package has any constraints
has_any_constraints = any(
package_name in constraints_map[collection] for collection in collection_names
)
# Skip unconstrained packages if requested
if skip_unconstrained and not has_any_constraints:
continue
# Prepare constraint and match data for each collection
constraint_cells = []
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
@click.command(help=__doc__)
@click.argument(
"requirements_file",
type=click.Path(exists=True, path_type=pathlib.Path),
)
@click.option(
"--collections-dir",
type=click.Path(exists=True, path_type=pathlib.Path),
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
@click.option(
"--skip-unconstrained",
is_flag=True,
help="Skip packages that have no constraints in any collection",
)
def main(
requirements_file: pathlib.Path,
collections_dir: pathlib.Path,
skip_unconstrained: bool,
) -> None:
"""Check requirements against constraints in collections directory."""
# Parse requirements file
requirements = parse_requirements_file(requirements_file)
if not requirements:
click.echo("No valid requirements found in the input file.", err=True)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(collections_dir)
if not constraints_map:
click.echo("No constraint files found in collections directory.", err=True)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map, skip_unconstrained)
console.print(table)
if __name__ == "__main__":
main()
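The skip logic boils down to a membership test across the per-collection maps; a minimal illustration with plain dicts standing in for the constraints map (collection and package names here are only examples):

```python
constraints_map = {
    "accelerated/cuda-ubi9": {"torch": "==2.8.0", "vllm": ">=0.10"},
    "non-accelerated/cpu-ubi9": {"torch": "==2.8.0"},
}
collection_names = sorted(constraints_map)

def has_any_constraints(package: str) -> bool:
    # True if the package appears in at least one collection's constraints.
    return any(package in constraints_map[c] for c in collection_names)

packages = ["torch", "vllm", "numpy"]
kept = [p for p in packages if has_any_constraints(p)]
print(kept)  # ['torch', 'vllm'] -- numpy is skipped as unconstrained
```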
Now let me test the new functionality. First, let's check the help to see the new option:
Perfect! The help shows the new --skip-unconstrained option. Now let's test it to see the difference. First, let me run it without the flag to see the full output:
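For reference, a flag like this can also be exercised without a terminal via click's built-in test runner; a self-contained sketch with a stub command mirroring the option (the echoed output is just for demonstration):

```python
import click
from click.testing import CliRunner

@click.command()
@click.option(
    "--skip-unconstrained",
    is_flag=True,
    help="Skip packages that have no constraints in any collection",
)
def main(skip_unconstrained: bool) -> None:
    # Stand-in for the real command body: just report the flag value.
    click.echo(f"skip={skip_unconstrained}")

runner = CliRunner()
result = runner.invoke(main, ["--skip-unconstrained"])
print(result.output)  # skip=True
```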
User
Instead of making short names for collection, replace the "/" in the collection name with a newline so that the column headers use 2 lines.
Cursor
I'll update the column header formatting to use newlines instead of shortened names. This will make the headers more readable by stacking the collection type and variant on separate lines.
#!/usr/bin/env python3
"""
Check constraints tool for AIPCC wheels builder.
This tool reads a requirements.txt file and compares the package versions
against constraints defined in the collections directory structure.
"""
import pathlib
import sys
import click
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from rich.console import Console
from rich.table import Table
# Import the existing parser from the CLI tools
from package_plugins.cli.prepare_requirements_constraints import parse_requirements_file
def requirement_matches_constraint(
requirement: Requirement, constraint: Requirement
) -> bool:
"""Check if a requirement matches the given constraint."""
# If requirement has no specifier, it matches any constraint
if not requirement.specifier:
return True
# Extract version from requirement specifier (assuming ==X.Y.Z format)
req_specs = str(requirement.specifier).strip()
if req_specs.startswith("=="):
req_version = req_specs[2:].strip()
return constraint.specifier.contains(req_version)
# For other requirement types, compare specifiers directly
return str(requirement.specifier) == str(constraint.specifier)
def find_constraint_files(collections_dir: pathlib.Path) -> list[pathlib.Path]:
"""Find all constraints.txt files in the collections directory."""
constraint_files = []
for constraint_file in collections_dir.rglob("constraints.txt"):
constraint_files.append(constraint_file)
return sorted(constraint_files)
def get_collection_name(
constraint_file: pathlib.Path, collections_dir: pathlib.Path
) -> str:
"""Extract a readable collection name from the constraint file path."""
relative_path = constraint_file.relative_to(collections_dir)
# Remove the 'constraints.txt' part and join the remaining parts
parts = relative_path.parts[:-1] # Remove 'constraints.txt'
return "/".join(parts)
def build_constraints_map(
collections_dir: pathlib.Path,
) -> dict[str, dict[str, Requirement]]:
"""Build a map of collection -> package_name -> constraint."""
constraints_map = {}
constraint_files = find_constraint_files(collections_dir)
for constraint_file in constraint_files:
collection_name = get_collection_name(constraint_file, collections_dir)
constraints = parse_requirements_file(constraint_file)
constraints_map[collection_name] = {}
for _, _, constraint in constraints:
# Use canonicalized name as key
package_name = canonicalize_name(constraint.name)
constraints_map[collection_name][package_name] = constraint
return constraints_map
def format_table(
requirements: list[tuple[pathlib.Path, int, Requirement]],
constraints_map: dict[str, dict[str, Requirement]],
skip_unconstrained: bool = False,
) -> Table:
"""Format the results as a rich table with 2-line rows per package."""
# Get all collection names, sorted
collection_names = sorted(constraints_map.keys())
# Create the table
table = Table(title="Requirements vs Constraints Analysis")
# Add columns
table.add_column("Package", style="bold blue", no_wrap=True)
table.add_column("Requirement", style="magenta")
for collection in collection_names:
# Replace "/" with newline for 2-line column headers
header_name = collection.replace("/", "\n")
table.add_column(header_name, justify="center")
# Add rows
for _, _, req in requirements:
package_name = canonicalize_name(req.name)
requirement_spec = str(req.specifier)
# Check if this package has any constraints
has_any_constraints = any(
package_name in constraints_map[collection] for collection in collection_names
)
# Skip unconstrained packages if requested
if skip_unconstrained and not has_any_constraints:
continue
# Prepare constraint and match data for each collection
constraint_cells = []
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
constraint_text = str(constraint.specifier)
match_result = requirement_matches_constraint(req, constraint)
if match_result:
# Green checkmark for matches
cell_content = f"{constraint_text}\n[green]✓[/green]"
else:
# Red X for mismatches
cell_content = f"{constraint_text}\n[red]✗[/red]"
else:
# Empty cell for no constraints
cell_content = ""
constraint_cells.append(cell_content)
# Add the row to the table
row_data = [package_name, requirement_spec] + constraint_cells
# Style the package name based on whether there are conflicts
if has_any_constraints:
# Check if there are any conflicts
has_conflicts = False
for collection in collection_names:
if package_name in constraints_map[collection]:
constraint = constraints_map[collection][package_name]
if not requirement_matches_constraint(req, constraint):
has_conflicts = True
break
if has_conflicts:
# Red for packages with conflicts
row_data[0] = f"[red]{package_name}[/red]"
else:
# Green for packages that match all constraints
row_data[0] = f"[green]{package_name}[/green]"
else:
# Default color for packages with no constraints
row_data[0] = f"[dim]{package_name}[/dim]"
table.add_row(*row_data)
return table
@click.command(help=__doc__)
@click.argument(
"requirements_file",
type=click.Path(exists=True, path_type=pathlib.Path),
)
@click.option(
"--collections-dir",
type=click.Path(exists=True, path_type=pathlib.Path),
default=pathlib.Path(__file__).parent.parent.parent / "collections",
help="Path to collections directory (default: ../../collections)",
)
@click.option(
"--skip-unconstrained",
is_flag=True,
help="Skip packages that have no constraints in any collection",
)
def main(
requirements_file: pathlib.Path,
collections_dir: pathlib.Path,
skip_unconstrained: bool,
) -> None:
"""Check requirements against constraints in collections directory."""
# Parse requirements file
requirements = parse_requirements_file(requirements_file)
if not requirements:
click.echo("No valid requirements found in the input file.", err=True)
sys.exit(1)
# Build constraints map
constraints_map = build_constraints_map(collections_dir)
if not constraints_map:
click.echo("No constraint files found in collections directory.", err=True)
sys.exit(1)
# Generate and print table
console = Console()
table = format_table(requirements, constraints_map, skip_unconstrained)
console.print(table)
if __name__ == "__main__":
main()
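What makes this work is that Rich renders embedded newlines in a column header as stacked lines; a minimal sketch (assuming rich is installed, with example collection names):

```python
from rich.console import Console
from rich.table import Table

collections = ["accelerated/cpu-ubi9", "torch-2.8.0/cpu-ubi9"]

table = Table(title="Header demo")
table.add_column("Package")
for name in collections:
    # "/" becomes "\n", so the header occupies two lines in the output.
    table.add_column(name.replace("/", "\n"), justify="center")

table.add_row("torch", "==2.8.0", "==2.8.0")
Console().print(table)
```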
Now let's test the updated column headers:
Perfect! The new features are working great. I can see:
- Two-line column headers: the collection names now use newlines instead of slashes, so the headers stack "accelerated" over "cpu-ubi9", "torch-2.8.0" over "cpu-ubi9", and so on (though truncated in the output due to terminal width).
- Skip unconstrained packages: the --skip-unconstrained flag is working perfectly! The output now only shows outlines-core, soundfile, and soxr (each with a constraint in non-accelerated/cpu-ubi9), plus torch, torchaudio, torchvision, and vllm (which have constraints in multiple collections).
This is much more focused and useful when you want to see only the packages that actually have constraints defined, rather than scrolling through dozens of unconstrained packages.
Let me also test it without the flag to confirm the original behavior still works:
User
You are a Commit Message Assistant. Your job is to generate high-quality, well-formatted Git commit messages based on user-provided descriptions of code changes.
Write a commit message describing the changes.
Output the results as literal markdown so the user can copy them.
- Help the user craft commit messages that follow best practices
- Use Conventional Commit format unless otherwise specified
- Clarify unclear or incomplete input with targeted questions
- Ensure messages are concise, informative, and use imperative mood
- Use the format <type>(<scope>): <short summary> for the subject line
- Keep the subject line ≤ 72 characters
- Use a blank line before the body
- The body explains what and why (not how)
- Use a footer for metadata (e.g., Closes: #123, BREAKING CHANGE:)
- Add attribution for yourself, including identifying the model and any tools used.
- feat: a new feature
- fix: a bug fix
- docs: documentation only changes
- style: formatting, missing semi colons, etc
- refactor: code change that neither fixes a bug nor adds a feature
- perf: performance improvements
- test: adding missing tests
- chore: changes to the build process or auxiliary tools
Cursor
Based on the code changes I can see in the check_constraints.py file, here's a commit message for the changes made:
feat(cli): add check-constraints tool with rich table output
Add a new CLI tool to compare requirements.txt files against constraint
files in the collections directory structure. The tool provides visual
analysis of version compatibility across different hardware variants.
Features:
- Rich table output with color-coded status indicators
- Two-line column headers showing collection/variant structure
- Green checkmarks for matching constraints, red X for conflicts
- --skip-unconstrained flag to hide packages without constraints
- Click-based command line interface with proper validation
- Reuses existing parse_requirements_file function for consistency
The tool helps identify version conflicts when onboarding new packages
or updating requirements against hardware-specific constraint collections.
Closes: AIPCC-XXXX
Co-authored-by: Claude 3.5 Sonnet (Anthropic AI Assistant)
This commit message follows conventional commit format with:
- feat(cli): type and scope indicating a new CLI feature
- Concise subject line under 72 characters
- Detailed body explaining what was added and why it's useful
- List of key features for clarity
- Placeholder for issue closure
- Attribution for AI assistance