2 changes: 2 additions & 0 deletions bootstrap/bootstrap-pod.yaml
@@ -5,6 +5,8 @@ metadata:
namespace: openshift-cluster-version
labels:
k8s-app: cluster-version-operator
annotations:
include.release.openshift.io/{{ .ClusterProfile }}: "true"
Contributor:
Could we include an explanation in the commit for why the change was made?

Contributor (Author):
Yeah, I can update that. This came out of implementing the manifest filtering completely through the render step: previously the render step did not actually do most filtering, which meant manifests without a cluster profile annotation were still applied. Now every manifest needs a cluster profile annotation to be rendered, and this one was missing it.
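A minimal sketch of the stricter rule described above (the helper name is hypothetical; the annotation prefix is the one used in this diff): a render step could reject any manifest that carries no include annotation for the active cluster profile.

package render

import "fmt"

// includeAnnotationPrefix matches the annotation added in this diff,
// e.g. include.release.openshift.io/self-managed-high-availability.
const includeAnnotationPrefix = "include.release.openshift.io/"

// requireProfileAnnotation is an illustrative helper: under the stricter
// filtering, a manifest with no annotation for the active profile is an
// error rather than being silently applied.
func requireProfileAnnotation(annotations map[string]string, profile string) error {
	value, ok := annotations[includeAnnotationPrefix+profile]
	if !ok {
		return fmt.Errorf("manifest has no %s%s annotation", includeAnnotationPrefix, profile)
	}
	if value != "true" {
		return fmt.Errorf("manifest is excluded for profile %q", profile)
	}
	return nil
}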

spec:
containers:
- name: cluster-version-operator
2 changes: 2 additions & 0 deletions go.mod
@@ -33,6 +33,8 @@ require (
k8s.io/utils v0.0.0-20250604170112-4c0f3b243397
)

replace github.com/openshift/library-go => github.com/JoelSpeed/library-go v0.0.0-20251217102247-bcd739ac7ffa
Reviewer (bot):
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed:

# Check the full go.mod file to understand context
cat go.mod

# Look for any git history or PR context about this change
git log --oneline -n 20 -- go.mod

# Search for any documentation or comments about library-go dependency
rg -i "library-go" --type md --type txt -A 2 -B 2

Replace directive pointing to a personal library-go fork must be reverted before merge

The replace github.com/openshift/library-go => github.com/JoelSpeed/library-go v0.0.0-20251217102247-bcd739ac7ffa directive must not land on main: personal-fork replacements create maintenance and security risks. Before merging, drop the replace line and restore the upstream github.com/openshift/library-go version that includes the necessary feature-gate changes (once the upstream PR is merged), then run go mod tidy and regenerate go.sum so dependencies stay consistent.


require (
cel.dev/expr v0.24.0 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
4 changes: 2 additions & 2 deletions go.sum
@@ -1,5 +1,7 @@
cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
github.com/JoelSpeed/library-go v0.0.0-20251217102247-bcd739ac7ffa h1:XI0l1W9wZLiVI/VV8nDtuX3RN7vzAKH9HnoEnCgJF/s=
github.com/JoelSpeed/library-go v0.0.0-20251217102247-bcd739ac7ffa/go.mod h1:ErDfiIrPHH+menTP/B4LKd0nxFDdvCbTamAc6SWMIh8=
github.com/antlr4-go/antlr/v4 v4.13.0 h1:lxCg3LAv+EUK6t1i0y1V6/SLeUi0eKEKdhQAlS8TVTI=
github.com/antlr4-go/antlr/v4 v4.13.0/go.mod h1:pfChB/xh/Unjila75QW7+VU4TSnWnnk9UTnmpPaOR2g=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -90,8 +92,6 @@ github.com/openshift/api v0.0.0-20251127005036-0e3c378fdedc h1:p83VYAk7mlqYZrMaK
github.com/openshift/api v0.0.0-20251127005036-0e3c378fdedc/go.mod h1:d5uzF0YN2nQQFA0jIEWzzOZ+edmo6wzlGLvx5Fhz4uY=
github.com/openshift/client-go v0.0.0-20251201171210-333716c1124a h1:iJYjd+rxyjMa3Sk6Vg55secJ4yMrabr/ulnTiy+vDH0=
github.com/openshift/client-go v0.0.0-20251201171210-333716c1124a/go.mod h1:WD7m8ADeqiAKTHWx/mBoE/1MFMtnt9MYTyBOnf0L3LI=
github.com/openshift/library-go v0.0.0-20251120164824-14a789e09884 h1:6512TMT14gnXQ4vyshzAQGjkctU0PO9G+y0tcBjw6Vk=
github.com/openshift/library-go v0.0.0-20251120164824-14a789e09884/go.mod h1:ErDfiIrPHH+menTP/B4LKd0nxFDdvCbTamAc6SWMIh8=
github.com/openshift/onsi-ginkgo/v2 v2.6.1-0.20241205171354-8006f302fd12 h1:AKx/w1qpS8We43bsRgf8Nll3CGlDHpr/WAXvuedTNZI=
github.com/openshift/onsi-ginkgo/v2 v2.6.1-0.20241205171354-8006f302fd12/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/operator-framework/api v0.17.1 h1:J/6+Xj4IEV8C7hcirqUFwOiZAU3PbnJhWvB0/bB51c4=
3 changes: 2 additions & 1 deletion hack/cluster-version-util/task_graph.go
@@ -6,6 +6,7 @@ import (
"time"

"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/util/sets"

"github.com/openshift/cluster-version-operator/pkg/payload"
)
@@ -30,7 +31,7 @@ func newTaskGraphCmd() *cobra.Command {

func runTaskGraphCmd(cmd *cobra.Command, args []string) error {
manifestDir := args[0]
release, err := payload.LoadUpdate(manifestDir, "", "", "", payload.DefaultClusterProfile, nil)
release, err := payload.LoadUpdate(manifestDir, "", "", "", payload.DefaultClusterProfile, nil, sets.Set[string]{})
Contributor (Author):
This doesn't handle feature sets today; does it need to handle feature gates?

Contributor (@DavidHurta, Jan 12, 2026):
I do not think so. It would be nice, but as you have stated, the file is not being kept up-to-date with new functionalities, so I believe it's okay not to implement it for now.

if err != nil {
return err
}
32 changes: 32 additions & 0 deletions install/0000_90_cluster-version-operator_03_configmaps.yaml
Contributor (Author):
Note: this file is not to be merged; it is here just for testing purposes.

@@ -0,0 +1,32 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: dev-preview-created
namespace: openshift-cluster-version
annotations:
include.release.openshift.io/self-managed-high-availability: "true"
release.openshift.io/feature-gates: "Example,Example2"
data:
example: "this should be created on dev-preview-only as it's annotated with Example and Example2"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: tech-preview-created
namespace: openshift-cluster-version
annotations:
include.release.openshift.io/self-managed-high-availability: "true"
release.openshift.io/feature-gates: "Example,-Example2"
data:
example: "this should be created on tech-preview-only as it's annotated with Example and not Example2"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: default-created
namespace: openshift-cluster-version
annotations:
include.release.openshift.io/self-managed-high-availability: "true"
release.openshift.io/feature-gates: "-Example,-Example2"
data:
example: "this should be created on default-only as it's annotated with negation for Example and Example2"
3 changes: 3 additions & 0 deletions lib/manifest/manifest.go
@@ -48,6 +48,7 @@ func GetImplicitlyEnabledCapabilities(
currentPayloadManifests []manifest.Manifest,
manifestInclusionConfiguration InclusionConfiguration,
currentImplicitlyEnabled sets.Set[configv1.ClusterVersionCapability],
enabledFeatureGates sets.Set[string],
) sets.Set[configv1.ClusterVersionCapability] {
ret := currentImplicitlyEnabled.Clone()
for _, updateManifest := range updatePayloadManifests {
@@ -57,6 +58,7 @@
manifestInclusionConfiguration.Profile,
manifestInclusionConfiguration.Capabilities,
manifestInclusionConfiguration.Overrides,
enabledFeatureGates,
true,
)
// update manifest is enabled, no need to check
@@ -74,6 +76,7 @@
manifestInclusionConfiguration.Profile,
manifestInclusionConfiguration.Capabilities,
manifestInclusionConfiguration.Overrides,
enabledFeatureGates,
true,
); err != nil {
continue
117 changes: 112 additions & 5 deletions pkg/cvo/cvo.go
@@ -13,6 +13,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/wait"
informerscorev1 "k8s.io/client-go/informers/core/v1"
"k8s.io/client-go/kubernetes"
@@ -120,6 +121,7 @@ type Operator struct {
cmConfigLister listerscorev1.ConfigMapNamespaceLister
cmConfigManagedLister listerscorev1.ConfigMapNamespaceLister
proxyLister configlistersv1.ProxyLister
featureGateLister configlistersv1.FeatureGateLister
cacheSynced []cache.InformerSynced

// queue tracks applying updates to a cluster.
@@ -189,6 +191,10 @@ type Operator struct {
// featurechangestopper controller will detect when cluster feature gate config changes and shutdown the CVO.
enabledFeatureGates featuregates.CvoGateChecker

// featureGatesMutex protects access to enabledManifestFeatureGates
featureGatesMutex sync.RWMutex
enabledManifestFeatureGates sets.Set[string]

clusterProfile string
uid types.UID

@@ -213,6 +219,7 @@ func New(
cmConfigManagedInformer informerscorev1.ConfigMapInformer,
proxyInformer configinformersv1.ProxyInformer,
operatorInformerFactory operatorexternalversions.SharedInformerFactory,
featureGateInformer configinformersv1.FeatureGateInformer,
client clientset.Interface,
kubeClient kubernetes.Interface,
operatorClient operatorclientset.Interface,
@@ -248,9 +255,9 @@
kubeClient: kubeClient,
operatorClient: operatorClient,
eventRecorder: eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: namespace}),
queue: workqueue.NewTypedRateLimitingQueueWithConfig[any](workqueue.DefaultTypedControllerRateLimiter[any](), workqueue.TypedRateLimitingQueueConfig[any]{Name: "clusterversion"}),
availableUpdatesQueue: workqueue.NewTypedRateLimitingQueueWithConfig[any](workqueue.DefaultTypedControllerRateLimiter[any](), workqueue.TypedRateLimitingQueueConfig[any]{Name: "availableupdates"}),
upgradeableQueue: workqueue.NewTypedRateLimitingQueueWithConfig[any](workqueue.DefaultTypedControllerRateLimiter[any](), workqueue.TypedRateLimitingQueueConfig[any]{Name: "upgradeable"}),
queue: workqueue.NewTypedRateLimitingQueueWithConfig(workqueue.DefaultTypedControllerRateLimiter[any](), workqueue.TypedRateLimitingQueueConfig[any]{Name: "clusterversion"}),
availableUpdatesQueue: workqueue.NewTypedRateLimitingQueueWithConfig(workqueue.DefaultTypedControllerRateLimiter[any](), workqueue.TypedRateLimitingQueueConfig[any]{Name: "availableupdates"}),
upgradeableQueue: workqueue.NewTypedRateLimitingQueueWithConfig(workqueue.DefaultTypedControllerRateLimiter[any](), workqueue.TypedRateLimitingQueueConfig[any]{Name: "upgradeable"}),

hypershift: hypershift,
exclude: exclude,
@@ -276,6 +283,9 @@
if _, err := coInformer.Informer().AddEventHandler(optr.clusterOperatorEventHandler()); err != nil {
return nil, err
}
if _, err := featureGateInformer.Informer().AddEventHandler(optr.featureGateEventHandler()); err != nil {
return nil, err
}

optr.coLister = coInformer.Lister()
optr.cacheSynced = append(optr.cacheSynced, coInformer.Informer().HasSynced)
@@ -287,6 +297,12 @@
optr.cmConfigLister = cmConfigInformer.Lister().ConfigMaps(internal.ConfigNamespace)
optr.cmConfigManagedLister = cmConfigManagedInformer.Lister().ConfigMaps(internal.ConfigManagedNamespace)

optr.featureGateLister = featureGateInformer.Lister()
optr.cacheSynced = append(optr.cacheSynced, featureGateInformer.Informer().HasSynced)

// Initialize cluster feature gates
optr.initializeFeatureGates()

// make sure this is initialized after all the listers are initialized
optr.upgradeableChecks = optr.defaultUpgradeableChecks()

@@ -318,7 +334,7 @@ func (optr *Operator) LoadInitialPayload(ctx context.Context, restConfig *rest.C
}

update, err := payload.LoadUpdate(optr.defaultPayloadDir(), optr.release.Image, optr.exclude, string(optr.requiredFeatureSet),
Contributor:
One thing that has come to my mind from looking more and more at the code: with the proposed changes, the CVO could now be working with two different FeatureGate generations at a given moment. The feature set is received quite differently from the feature gates; however, both are used simultaneously at a concrete moment to evaluate whether or not to include a manifest. Due to parallelism and graceful shutdowns, the following could maybe happen, given the Include() logic allows for manifests like:

Include only when default and gate is enabled:

annotations:
  feature-gate: Gate

Include only when techpreview and gate is enabled:

annotations:
  feature-set: TechPreviewNoUpgrade
  feature-gate: Gate

to state the desire to apply a manifest only when a desired feature set is set AND when a desired feature gate is enabled/disabled.

  1. The CVO may be working with a default feature set.
  2. FeatureGate.spec.featureSet gets set to TechPreviewNoUpgrade
  3. The enabled gates of the FeatureGate are populated.
  4. The CVO gets notified of a change in enabled gates.
  5. The CVO starts applying a payload with Default and new enabled gates, even though this was never an existing feature gate state.
  6. The CVO gets shut down due to its ChangeStopper on feature set changes.
  7. The CVO starts applying the payload with the desired setting; however, manifests may have already been deployed, which were not intended to be deployed, and may not be automatically deleted.

I suppose this also highlights the fact that a feature set is set independently of enabled gates, which are set by a cluster operator. However, what about the time between these two actions? The CVO may be applying manifests with a setting that’s only temporary. Was this considered? I am not sure how a CVO could evaluate that the FeatureGate.status has been updated with the relevant enabled gates based on the existing API.


An additional example may be the current initialization, which tries to get the enabled gates. Due to an error when fetching, the CVO could start applying manifests given a configuration of (requiredFeatureSet, noEnabledGates), which may cause a similar issue: manifests with -Gate will get applied, and after that, Gate manifests get applied. What if they do not overlap the same objects, which would otherwise get reapplied?


Everything would be dandy, given that we eventually reach the desired cluster state. However, for example, an object deletion requires a specific manifest annotation for deletion [2]. There seem to be areas for non-deterministic application of manifests, which could theoretically leave some existing objects in the cluster (be that non-important manifests or important manifests introducing potential problems, be that even a deployment with an exploitable vulnerability not intended to be deployed in production environments).

Contributor (@DavidHurta, Jan 13, 2026):
I may also be misunderstanding/not clear on the usage of the annotation, to-be feature gate usage, and relevant constraints 👀

Contributor (@DavidHurta, Jan 13, 2026):
Or should the usage of a feature-set or feature-gate annotation disallow the usage of the other in a given manifest? Throwing some ideas and questions into the air.

Contributor (Author):

Ok, so this is quite an interesting thread, with a few things to unpick

> Include only when default and gate is enabled:
>
> annotations:
>   feature-gate: Gate

This means apply when the gate is enabled; it has no bearing on the feature set, so this would apply whether the gate was enabled in TP, DP, Default or Custom. This is the intended use case.

> Include only when techpreview and gate is enabled:
>
> annotations:
>   feature-set: TechPreviewNoUpgrade
>   feature-gate: Gate
>
> to state the desire to apply a manifest only when a desired feature set is set AND when a desired feature gate is enabled/disabled.

This is not how I was intending for anyone to use the feature here; you would use one or the other. Long term, I don't want anyone beyond the openshift/api repo to use the old feature-set distinction. I can't think of a use case presently where featureset is better than featuregate for general features, nor can I think of a case where you would want to use both.

Logically, if the manifest is required as part of the feature in TP, why wouldn't it also be required as part of the feature when it is promoted to default? 🤔

> I suppose this also highlights the fact that a feature set is set independently of enabled gates, which are set by a cluster operator. However, what about the time between these two actions? The CVO may be applying manifests with a setting that's only temporary. Was this considered? I am not sure how a CVO could evaluate that the FeatureGate.status has been updated with the relevant enabled gates based on the existing API.

I wonder if we want to update the feature gate watching so that featureset and featuregate update in the same way, either synchronously, or by both causing a kill on change and having the pod restart? I was trying to avoid pod restarts where possible; are you sure the featureset causes a restart, or is that just when gates pertinent to the CVO's operation change?

> Everything would be dandy, given that we eventually reach the desired cluster state. However, for example, an object deletion requires a specific manifest annotation for deletion [2]. There seem to be areas for non-deterministic application of manifests, which could theoretically leave some existing objects in the cluster (be that non-important manifests or important manifests introducing potential problems, be that even a deployment with an exploitable vulnerability not intended to be deployed in production environments).

Changes to the FeatureGate are generally additive. In fact, only in CustomNoUpgrade can a feature gate currently go from On to Off. So in theory, when the featureset changes, any manifest that does get applied before the gates are updated should also be applied once the gates are updated. There might be a small period where not all manifests are being applied, but I don't think there should be a case where some get applied that weren't eventually meant to.

Now, in the future, we plan to use gates to remove features; that's where things might be different. We will have manifests with a gate enabled, meaning the feature is on, apply the deployment etc., and then with the gate disabled, apply the tombstone version. Again, there should be no need to use featureset here.

> Or should the usage of a feature-set or feature-gate annotation disallow the usage of the other in a given manifest? Throwing some ideas and questions into the air.

I'll do this. We aren't expecting them to be used in tandem, and I cannot currently think of a use case for this, so this makes sense to avoid potential races (though I'm fairly confident we would be alright anyway).
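A minimal sketch of the mutual-exclusion check being agreed to here (the helper name is hypothetical, and the annotation keys are assumptions based on the keys used elsewhere in this PR):

package payload

import "fmt"

// Hypothetical annotation keys for illustration; the real constants
// would live alongside the other payload annotation definitions.
const (
	featureSetAnnotation  = "release.openshift.io/feature-set"
	featureGateAnnotation = "release.openshift.io/feature-gates"
)

// validateFeatureAnnotations rejects manifests that declare both a
// feature-set and a feature-gate constraint, since the two are not
// intended to be used in tandem.
func validateFeatureAnnotations(annotations map[string]string) error {
	_, hasSet := annotations[featureSetAnnotation]
	_, hasGates := annotations[featureGateAnnotation]
	if hasSet && hasGates {
		return fmt.Errorf("manifest may set %s or %s, not both", featureSetAnnotation, featureGateAnnotation)
	}
	return nil
}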

optr.clusterProfile, configv1.KnownClusterVersionCapabilities)
optr.clusterProfile, configv1.KnownClusterVersionCapabilities, optr.getEnabledFeatureGates())

if err != nil {
return nil, fmt.Errorf("the local release contents are invalid - no current version can be determined from disk: %v", err)
@@ -779,7 +795,7 @@ func (optr *Operator) sync(ctx context.Context, key string) error {
}

// inform the config sync loop about our desired state
status := optr.configSync.Update(ctx, config.Generation, desired, config, state)
status := optr.configSync.Update(ctx, config.Generation, desired, config, state, optr.getEnabledFeatureGates())

// write cluster version status
return optr.syncStatus(ctx, original, config, status, errs)
@@ -1084,6 +1100,97 @@ func (optr *Operator) HTTPClient() (*http.Client, error) {
}, nil
}

// featureGateEventHandler handles changes to FeatureGate objects and updates the cluster feature gates
func (optr *Operator) featureGateEventHandler() cache.ResourceEventHandler {
workQueueKey := optr.queueKey()
return cache.ResourceEventHandlerFuncs{
Contributor:
We do have a DeleteFunc set for the ChangeStopper on some feature gate changes. I am wondering whether we should include one here as well. Did you consider the option?

Contributor (Author):
You cannot delete the FeatureGate resource; there is an admission-time block on this, so there should never be a delete event coming through the informer. If there were, I'm not exactly sure what you'd do here anyway, so I felt it was OK to omit the delete handler on this occasion.

AddFunc: func(obj interface{}) {
if optr.updateEnabledFeatureGates(obj) {
optr.queue.Add(workQueueKey)
}
},
UpdateFunc: func(old, new interface{}) {
if optr.updateEnabledFeatureGates(new) {
optr.queue.Add(workQueueKey)
}
},
}
}

// initializeFeatureGates initializes the cluster feature gates from the current FeatureGate object
func (optr *Operator) initializeFeatureGates() {
// Try to load initial state from the cluster FeatureGate object
Contributor:
We could take extra consideration for the apierrors.IsNotFound case, to differentiate between "everything is working, but the object is not there - we are defaulting - this is the intended mechanic" and "something is not working - should we fail or default?".
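A sketch of that differentiation as a fragment inside initializeFeatureGates (this is the reviewer's suggestion, not code from the PR; it assumes apierrors "k8s.io/apimachinery/pkg/api/errors" is imported):

// Distinguish "object absent, defaulting is intended" from real failures.
featureGate, err := optr.featureGateLister.Get("cluster")
switch {
case err == nil:
	optr.updateEnabledFeatureGates(featureGate)
case apierrors.IsNotFound(err):
	// Intended mechanic: the object is not there, so defaulting is fine.
	klog.V(2).Info("FeatureGate 'cluster' not found; using default enabled gates")
default:
	// Something is actually wrong; surface it rather than defaulting silently.
	klog.Warningf("failed to read FeatureGate 'cluster': %v", err)
}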

if optr.featureGateLister != nil {
if featureGate, err := optr.featureGateLister.Get("cluster"); err == nil {
optr.updateEnabledFeatureGates(featureGate)
}
}
Comment on lines +1123 to +1127
Contributor:
We are silently ignoring the error here, which will result in the default enabled gates being set, and that fact may not be logged anywhere. We do log when no enabled gates are found while extracting.

}

// updateEnabledFeatureGates updates the cluster feature gates based on a FeatureGate object
func (optr *Operator) updateEnabledFeatureGates(obj interface{}) bool {
Comment on lines +1130 to +1131
Contributor:
Could we please add a description for the meaning of the return value?

featureGate, ok := obj.(*configv1.FeatureGate)
if !ok {
klog.Warningf("Expected FeatureGate object but got %T", obj)
return false
}

newGates := optr.extractEnabledGates(featureGate)

// Check if gates actually changed to avoid unnecessary work
if !optr.enabledManifestFeatureGates.Equal(newGates) {
optr.featureGatesMutex.Lock()
defer optr.featureGatesMutex.Unlock()
Comment on lines +1140 to +1143
Contributor:
The enabledManifestFeatureGates value is being read before its mutex is locked.

Contributor (Author):
Good catch, I thought I had fixed this one; let me check I pushed everything 🤔
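For reference, one way to close that race (a sketch, not necessarily the fix the author pushed) is to take the lock before the comparison:

// Sketch: hold the lock across both the comparison and the update so the
// read of enabledManifestFeatureGates is never unsynchronized.
optr.featureGatesMutex.Lock()
defer optr.featureGatesMutex.Unlock()

if optr.enabledManifestFeatureGates.Equal(newGates) {
	return false
}
klog.V(2).Infof("Cluster feature gates changed from %v to %v",
	sets.List(optr.enabledManifestFeatureGates), sets.List(newGates))
optr.enabledManifestFeatureGates = newGates
return true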


klog.V(2).Infof("Cluster feature gates changed from %v to %v",
sets.List(optr.enabledManifestFeatureGates), sets.List(newGates))

optr.enabledManifestFeatureGates = newGates

return true
}

return false
}

// getEnabledFeatureGates returns a copy of the current cluster feature gates for safe consumption
func (optr *Operator) getEnabledFeatureGates() sets.Set[string] {
optr.featureGatesMutex.RLock()
defer optr.featureGatesMutex.RUnlock()

// Return a copy to prevent external modification
result := sets.Set[string]{}
for gate := range optr.enabledManifestFeatureGates {
result.Insert(gate)
}
return result
}

// extractEnabledGates extracts the list of enabled feature gates for the current cluster version
func (optr *Operator) extractEnabledGates(featureGate *configv1.FeatureGate) sets.Set[string] {
enabledGates := sets.Set[string]{}

// Find the feature gate details for the current cluster version
currentVersion := optr.enabledFeatureGates.DesiredVersion()
for _, details := range featureGate.Status.FeatureGates {
if details.Version == currentVersion {
for _, enabled := range details.Enabled {
enabledGates.Insert(string(enabled.Name))
}
klog.V(4).Infof("Found %d enabled feature gates for version %s: %v",
enabledGates.Len(), currentVersion, sets.List(enabledGates))
break
}
}

// If no matching version found, log a warning but continue with empty set
if enabledGates.Len() == 0 {
klog.V(2).Infof("No feature gates found for current version %s, using empty set", currentVersion)
}
Comment on lines +1186 to +1189
Contributor:
The comment says "log a warning", but the code uses klog's Infof and not Warningf. Noting in case you wanted an actual warning in the logs; I am personally fine with klog.V(2).Infof.

Contributor (Author):
I would probably leave this as info, personally.


return enabledGates
}

// shouldReconcileCVOConfiguration returns whether the CVO should reconcile its configuration using the API server.
//
// enabledFeatureGates must be initialized before the function is called.