= {hcp-capital} release notes
include::_attributes/common-attributes.adoc[]
:context: hosted-control-planes-release-notes

toc::[]

Release notes contain information about new and deprecated features, changes, and known issues.

== {hcp-capital} release notes for {product-title} 4.19

With this release, {hcp} for {product-title} 4.19 is available. {hcp-capital} for {product-title} 4.19 supports {mce} version 2.9.
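
To verify the {mce-short} version that is installed on the management cluster, you can query the MultiClusterEngine resource. The following command is a sketch only; it assumes that the installed version is reported in the status.currentVersion field:

  $ oc get multiclusterengine -o jsonpath='{.items[0].status.currentVersion}'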

=== New features and enhancements

=== Bug fixes

=== Known issues

  • If the annotation and the ManagedCluster resource name do not match, the {mce} console displays the cluster as Pending import. The cluster cannot be used by the {mce-short}. The same issue happens when there is no annotation and the ManagedCluster name does not match the Infra-ID value of the HostedCluster resource.

  • When you use the {mce} console to add a new node pool to an existing hosted cluster, the same version of {product-title} might appear more than once in the list of options. You can select any instance in the list for the version that you want.

  • When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a Ready state. You can verify the number of nodes in two ways:

    • In the console, go to the node pool and verify that it has 0 nodes.

    • On the command-line interface, run the following commands:

      • Verify that 0 nodes are in the node pool by running the following command:

        $ oc get nodepool -A

      • Verify that 0 nodes are in the cluster by running the following command:

        $ oc get nodes --kubeconfig <hosted_cluster_kubeconfig_file>

      • Verify that 0 agents are reported as bound to the cluster by running the following command:

        $ oc get agents -A

  • When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:

    • CrashLoopBackOff state in the service-ca-operator pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the kube-system namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.

    • Pods stuck in the ContainerCreating state: This issue occurs because the openshift-service-ca-operator resource cannot generate the metrics-tls secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server. To resolve both issues, configure the DNS server settings for a dual-stack network.
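
      To confirm the symptoms before you change the DNS server settings, you can run the following commands against the cluster where the affected pods run. These checks are illustrative only and do not assume a particular namespace:

      • Check whether the service-ca-operator pod is in the CrashLoopBackOff state by running the following command:

        $ oc get pods -A | grep service-ca-operator

      • Check whether the metrics-tls secret exists by running the following command:

        $ oc get secrets -A | grep metrics-tls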

  • On the Agent platform, the {hcp} feature periodically rotates the token that the Agent uses to pull ignition. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition. As a workaround, in the Agent specification, delete the secret that the IgnitionEndpointTokenReference property references, and then add or modify any label on the Agent resource. The system re-creates the secret with the new token.
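
    The following commands are an illustration only. They assume that the Agent resource is named <agent_name> in the <agent_namespace> namespace, that the secret that the IgnitionEndpointTokenReference property references is named <token_secret_name> in the same namespace, and that the label key, ignition-token-refresh, is an arbitrary example:

        $ oc delete secret <token_secret_name> -n <agent_namespace>

        $ oc label agent <agent_name> -n <agent_namespace> ignition-token-refresh=true --overwrite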

  • If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace, including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:

    • You created a hosted cluster on the Agent platform through the {mce} console by using the default hosted cluster namespace.

    • You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
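
      For example, a command of the following form creates the hosted cluster in a namespace that matches the cluster name and can trigger this issue. The command is a sketch only; it assumes the hcp CLI --name and --namespace flags and omits the other flags that the command requires:

        $ hcp create cluster agent --name example-cluster --namespace example-cluster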

  • When you use the console or API to specify an IPv6 address for the spec.services.servicePublishingStrategy.nodePort.address field of a hosted cluster, a full IPv6 address with 8 hextets is required. For example, instead of specifying 2620:52:0:1306::30, you need to specify 2620:52:0:1306:0:0:0:30.

=== General Availability and Technology Preview features

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. For more information about the scope of support for these features, see Technology Preview Features Support Scope on the Red Hat Customer Portal.

For {ibm-power-title} and {ibm-z-title}, you must run the control plane on machine types based on 64-bit x86 architecture, and node pools on {ibm-power-title} or {ibm-z-title}.

.{hcp-capital} GA and TP tracker
[cols="4,1,1,1",options="header"]
|===
|Feature |4.17 |4.18 |4.19

|{hcp-capital} for {product-title} using non-bare-metal agent machines
|Technology Preview
|Technology Preview
|Technology Preview

|{hcp-capital} for an ARM64 {product-title} cluster on {aws-full}
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {ibm-power-title}
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {ibm-z-title}
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {rh-openstack}
|Developer Preview
|Developer Preview
|Developer Preview
|===
