
Conversation

@jluhrsen
Contributor

@jluhrsen jluhrsen commented Dec 19, 2025

CNO is going Degraded on the first connection issue with the API
server, but that can happen briefly on a new rollout or during
transient network issues. This is seen periodically in test cases
doing a new rollout like this one [0], which even does retries [1]
to work around the issue.

Instead of setting Degraded immediately on first failure, track
when failures start and only set Degraded after they persist for
2+ minutes. The etcd operator has a similar pattern and uses 2 minutes as its
default. [2]

Also fixes a race condition in SetFromMachineConfigPool where
setNotDegraded() was being called for each non-degraded role in
a loop, which could clear failure tracking before checking all
roles. Restructured to use a two-pass approach: first check all
roles for degradation, then check progressing status.

[0] https://github.com/openshift/origin/blob/3854d32174b5e9ddaded1dfcc8a865bb28ca04ad/test/extended/networking/services.go#L26
[1] https://github.com/openshift/origin/blob/3854d32174b5e9ddaded1dfcc8a865bb28ca04ad/test/extended/networking/services.go#L57-L63
[2] https://github.com/openshift/cluster-etcd-operator/blob/d93728daa2fb69410025029740b0f8826479c4c3/pkg/operator/starter.go#L390
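
Roughly, the debounce looks like this (a simplified sketch only, not the exact code in this PR; the statusDebouncer type and helper names are illustrative):

```go
package sketch

import (
	"time"

	"k8s.io/utils/clock"
)

// Illustrative stand-ins for the real status manager types.
type StatusLevel string

const degradedFailureDurationThreshold = 2 * time.Minute

type statusDebouncer struct {
	clk              clock.PassiveClock        // injectable so tests can use a fake clock
	failureFirstSeen map[StatusLevel]time.Time // when each level started failing
}

// shouldSetDegraded returns true only once a failure at this level has
// persisted past the threshold; the first call just records the start time.
func (d *statusDebouncer) shouldSetDegraded(level StatusLevel) bool {
	firstSeen, ok := d.failureFirstSeen[level]
	if !ok {
		d.failureFirstSeen[level] = d.clk.Now()
		return false
	}
	return d.clk.Since(firstSeen) >= degradedFailureDurationThreshold
}

// clearFailure resets tracking when the level recovers, so a later failure
// starts a fresh 2-minute window.
func (d *statusDebouncer) clearFailure(level StatusLevel) {
	delete(d.failureFirstSeen, level)
}
```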

@coderabbitai

coderabbitai bot commented Dec 19, 2025

Walkthrough

Adds a 2-minute persistence debounce for setting Degraded conditions using a clock abstraction and per-level first-seen timestamps; reorganizes MachineConfigPool status checks into a two-pass flow separating degraded detection from progressing/processing; updates tests to use a fake clock and validate time-based behavior.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Status manager — persistence & clock (pkg/controller/statusmanager/status_manager.go) | Adds failureFirstSeen map[StatusLevel]time.Time, a clock (clock.PassiveClock) and a degradedFailureDurationThreshold constant. Implements debouncing: record first-failure time, require 2-minute persistence before marking Degraded, and clear tracking when resolved. Constructor updated to initialize new fields. |
| MachineConfig status flow (pkg/controller/statusmanager/machineconfig_status.go) | Reworks SetFromMachineConfigPool into a two-pass approach: first pass detects any degraded pools per role and sets/clears degraded status; second pass re-evaluates per-role pools for progressing/processing, handling paused pools and added/removed configs. Moves degraded-detection out of progressing path. |
| Tests — fake clock and timing (pkg/controller/statusmanager/status_manager_test.go) | Introduces a fake/testing clock in tests, backdates failureFirstSeen entries, and advances the fake clock (e.g., +3 minutes) to verify debounced degraded transitions across MachineConfigPool, DaemonSet, Deployment, and Pod-related checks. Updates test sequences to assert time-based behavior. |
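
As a concrete illustration of the fake-clock pattern described in the Tests row above (a sketch only, not the actual test file; names are illustrative):

```go
package sketch

import (
	"testing"
	"time"

	clocktesting "k8s.io/utils/clock/testing"
)

// A failure observed "now" should not cross the 2-minute threshold, but it
// should after the fake clock is stepped forward 3 minutes.
func TestDegradedDebounceSketch(t *testing.T) {
	fakeClock := clocktesting.NewFakeClock(time.Now())
	firstSeen := fakeClock.Now() // pretend a failure was just observed

	if fakeClock.Since(firstSeen) >= 2*time.Minute {
		t.Fatal("a brand-new failure should not count as persistent yet")
	}

	fakeClock.Step(3 * time.Minute) // advance time without sleeping

	if fakeClock.Since(firstSeen) < 2*time.Minute {
		t.Fatal("after 3 minutes the failure should count as persistent")
	}
}
```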

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


🧹 Recent nitpick comments
pkg/controller/statusmanager/machineconfig_status.go (1)

129-148: Consider caching pool lookups to avoid duplicate work.

findMachineConfigPoolsForLabel is called twice per role—once in the degraded check loop (line 130) and again in the progressing loop (line 145). This duplicates work that could be cached.

♻️ Suggested optimization: cache pools from first pass
 func (status *StatusManager) SetFromMachineConfigPool(mcPools []mcfgv1.MachineConfigPool) error {
 	status.Lock()
 	defer status.Unlock()
-	// First check if any role is degraded
+	// Cache pools per role and check if any role is degraded
+	poolsByRole := make(map[string][]mcfgv1.MachineConfigPool)
 	for role := range status.renderedMachineConfigs {
 		pools, err := status.findMachineConfigPoolsForLabel(mcPools, map[string]string{names.MachineConfigLabelRoleKey: role})
 		if err != nil {
 			klog.Errorf("failed to get machine config pools for the role %s: %v", role, err)
 		}
+		poolsByRole[role] = pools
 		degradedPool := status.isAnyMachineConfigPoolDegraded(pools)
 		if degradedPool != "" {
 			status.setDegraded(MachineConfig, "MachineConfig", fmt.Sprintf("%s machine config pool in degraded state", degradedPool))
 			return nil
 		}
 	}
 	// No degraded pools, so clear degraded status
 	status.setNotDegraded(MachineConfig)
 
 	// Now check for progressing and process machine configs
 	for role, machineConfigs := range status.renderedMachineConfigs {
-		pools, err := status.findMachineConfigPoolsForLabel(mcPools, map[string]string{names.MachineConfigLabelRoleKey: role})
-		if err != nil {
-			klog.Errorf("failed to get machine config pools for the role %s: %v", role, err)
-		}
+		pools := poolsByRole[role]
 
 		progressingPool := status.isAnyMachineConfigPoolProgressing(pools)
📜 Recent review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 1b94f0f and 4436d4c.

⛔ Files ignored due to path filters (3)
  • vendor/k8s.io/utils/clock/testing/fake_clock.go is excluded by !vendor/**, !**/vendor/**
  • vendor/k8s.io/utils/clock/testing/simple_interval_clock.go is excluded by !vendor/**, !**/vendor/**
  • vendor/modules.txt is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (3)
  • pkg/controller/statusmanager/machineconfig_status.go
  • pkg/controller/statusmanager/status_manager.go
  • pkg/controller/statusmanager/status_manager_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/controller/statusmanager/status_manager_test.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

-Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity.

Files:

  • pkg/controller/statusmanager/machineconfig_status.go
  • pkg/controller/statusmanager/status_manager.go
🧬 Code graph analysis (1)
pkg/controller/statusmanager/machineconfig_status.go (2)
pkg/controller/statusmanager/status_manager.go (1)
  • MachineConfig (51-51)
pkg/names/names.go (1)
  • MachineConfigLabelRoleKey (177-177)
🔇 Additional comments (5)
pkg/controller/statusmanager/status_manager.go (4)

65-68: LGTM on the constant definition.

The 2-minute threshold aligns with the ETCD pattern mentioned in the PR objectives and provides a reasonable window to avoid flapping on transient failures.


112-115: LGTM on struct fields and initialization.

The failureFirstSeen map and clock.PassiveClock abstraction are well-designed for tracking per-level failure timestamps and enabling testability with fake clocks.

Also applies to: 147-148


558-578: Verify retry/reconciliation frequency ensures timely degraded detection.

The debounce logic is correct—first failure records timestamp, subsequent calls check persistence. However, this relies on the caller continuing to invoke setDegraded periodically for the condition to eventually be set after 2 minutes.

Ensure the reconciliation loop (or retry mechanism) runs frequently enough that failures are re-detected within a reasonable timeframe after the threshold elapses. If reconciliation intervals are significantly longer than 2 minutes, degraded detection could be delayed beyond the intended threshold.


580-584: LGTM on failure tracking cleanup.

Clearing the failureFirstSeen entry when a status level is no longer degraded ensures fresh tracking on subsequent failures.

pkg/controller/statusmanager/machineconfig_status.go (1)

128-141: LGTM on the two-pass degraded detection restructure.

Moving setNotDegraded(MachineConfig) outside the per-role loop fixes the race where it was previously called for each non-degraded role, which could prematurely clear failure tracking. Now degraded status is cleared only after confirming all roles have no degraded pools.

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci
Contributor

openshift-ci bot commented Dec 19, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jluhrsen
Once this PR has been reviewed and has the lgtm label, please assign knobunc for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
pkg/apply/apply.go (1)

130-135: Consider exponential backoff for faster recovery.

With Factor: 1.0, all retries wait a constant 5 seconds. For transient issues that resolve quickly (e.g., brief network blips), exponential backoff starting shorter would recover faster:

 	var backoff = wait.Backoff{
 		Steps:    6,
-		Duration: 5 * time.Second,
-		Factor:   1.0,
+		Duration: 1 * time.Second,
+		Factor:   2.0,
 		Jitter:   0.1,
 	}

This retries at ~1s, 2s, 4s, 8s, 16s, 32s intervals—faster initial recovery while still reaching similar total wait time.
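
For context, a minimal self-contained sketch of how such a backoff plugs into retry.OnError (the always-retry predicate and the failing attempt function are placeholders, not code from this PR):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/retry"
)

func main() {
	backoff := wait.Backoff{
		Steps:    6,
		Duration: 1 * time.Second,
		Factor:   2.0,
		Jitter:   0.1,
	}

	attempt := 0
	err := retry.OnError(backoff,
		func(err error) bool { return true }, // placeholder: retry on any error
		func() error { // placeholder for the real apply call
			attempt++
			fmt.Printf("attempt %d\n", attempt)
			return fmt.Errorf("still failing")
		})
	fmt.Println("final error after retries:", err)
}
```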

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 9d8ab48 and 60923fd.

📒 Files selected for processing (1)
  • pkg/apply/apply.go (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

-Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity.

Files:

  • pkg/apply/apply.go

@jluhrsen jluhrsen changed the title from "add retry logic for transient api server errors" to "CORENET-6605: add retry logic for transient api server errors" on Dec 19, 2025
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference label (indicates that this PR references a valid Jira ticket of any type) on Dec 19, 2025
@openshift-ci-robot
Contributor

openshift-ci-robot commented Dec 19, 2025

@jluhrsen: This pull request references CORENET-6605 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.


In response to this:

CNO is going Degraded on the first connection issue with the API server, but that can happen briefly on a new rollout. This is seen periodically in test cases doing a new rollout on purpose, like this one [0]; the test case even does a retry [1] because of this.

[0] https://github.com/openshift/origin/blob/3854d32174b5e9ddaded1dfcc8a865bb28ca04ad/test/extended/networking/services.go#L26
[1] https://github.com/openshift/origin/blob/3854d32174b5e9ddaded1dfcc8a865bb28ca04ad/test/extended/networking/services.go#L57-L63

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jluhrsen jluhrsen force-pushed the CORENET-6605 branch 2 times, most recently from dd0058d to df74afe on December 23, 2025 at 23:32
@jluhrsen
Contributor Author

/payload-aggregate periodic-ci-openshift-release-master-ci-4.21-upgrade-from-stable-4.20-e2e-aws-ovn-upgrade 10

@openshift-ci
Contributor

openshift-ci bot commented Dec 23, 2025

@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.21-upgrade-from-stable-4.20-e2e-aws-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/fd326400-e057-11f0-99fb-9f2dbe12e85d-0

@jluhrsen
Contributor Author

/payload-aggregate periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2 10

@openshift-ci
Contributor

openshift-ci bot commented Dec 24, 2025

@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/ad155810-e0f2-11f0-96b9-cee61ebafabc-0

@jluhrsen
Contributor Author

/retest
/payload-aggregate periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2 10

@openshift-ci
Contributor

openshift-ci bot commented Dec 24, 2025

@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c8beb9e0-e119-11f0-9d9d-69e629931e4c-0

@jluhrsen
Contributor Author

/retest
/payload-aggregate periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2 10

@openshift-ci
Contributor

openshift-ci bot commented Dec 27, 2025

@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/d52d40b0-e2eb-11f0-9214-d727f6f6cf81-0

@jluhrsen
Contributor Author

jluhrsen commented Jan 7, 2026

/payload-aggregate periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2 10

@openshift-ci
Contributor

openshift-ci bot commented Jan 7, 2026

@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/e3ed6bd0-eb64-11f0-9969-dc64703d602b-0

@jluhrsen
Contributor Author

jluhrsen commented Jan 8, 2026

/payload-aggregate periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2 10

@openshift-ci
Contributor

openshift-ci bot commented Jan 8, 2026

@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/ad12e9b0-ecbc-11f0-8708-c2c4dd3a9afd-0

@jluhrsen
Contributor Author

jluhrsen commented Jan 8, 2026

/test okd-scos-images

@jluhrsen
Contributor Author

jluhrsen commented Jan 8, 2026

@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/e3ed6bd0-eb64-11f0-9969-dc64703d602b-0

this group of 10 did not see the issue (as expected)

@danwinship
Contributor

CNO is going Degraded on the first connection issue with the API server

It seems like this is the real problem here; the docs say

Degraded indicates that the component (operator and all configured operands) does not match its desired state over a period of time resulting in a lower quality of service. The period of time may vary by component, but a Degraded state represents persistent observation of a condition.

(emphasis added)

We already queue a retry if ApplyObject fails. We just need to fix it so we don't set Degraded on the first failure. We should only set Degraded if we've failed N times, or if we've been failing for longer than N seconds/minutes.

(This problem is probably endemic to CNO's controllers...)

Contributor

@danwinship danwinship left a comment


(if we are going to merge the PR like this...)

Duration: 5 * time.Second,
Factor: 1.0,
Jitter: 0.1,
}
Contributor


Why these numbers, particularly given that the test code waits 1 minute?

err = retry.OnError(backoff, func(err error) bool {
// Don't retry on context cancellation (graceful shutdown)
return !errors.Is(err, context.Canceled) && !errors.Is(err, context.DeadlineExceeded)
}, func() error {
Contributor


code would probably be clearer if you split out the test into its own function, eg:

err = retry.OnError(backoff, isAPIServerRestartingError, func() error {
    ...
})
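
For example, a sketch of that helper, keeping the same predicate that is currently inlined above (the function name is just the one suggested here, and "package apply" assumes it sits next to the existing retry code in pkg/apply/apply.go):

```go
package apply

import (
	"context"
	"errors"
)

// isAPIServerRestartingError keeps the same logic as the inline predicate
// above: retry unless the error came from context cancellation (graceful
// shutdown).
func isAPIServerRestartingError(err error) bool {
	return !errors.Is(err, context.Canceled) && !errors.Is(err, context.DeadlineExceeded)
}
```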

@jluhrsen jluhrsen force-pushed the CORENET-6605 branch 2 times, most recently from ff7b923 to 1b94f0f on January 14, 2026 at 00:05
@openshift-ci-robot
Contributor

openshift-ci-robot commented Jan 14, 2026

@jluhrsen: This pull request references CORENET-6605 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.


In response to this:

CNO is going Degraded on the first connection issue with the API
server, but that can happen briefly on a new rollout or during
transient network issues. This is seen periodically in test cases
doing a new rollout like this one [0], which even does retries [1]
to work around the issue.

Instead of setting Degraded immediately on first failure, track
when failures start and only set Degraded after they persist for
2+ minutes. The etcd operator has a similar pattern and uses 2 minutes as its
default. [2]

Also fixes a race condition in SetFromMachineConfigPool where
setNotDegraded() was being called for each non-degraded role in
a loop, which could clear failure tracking before checking all
roles. Restructured to use a two-pass approach: first check all
roles for degradation, then check progressing status.

[0] https://github.com/openshift/origin/blob/3854d32174b5e9ddaded1dfcc8a865bb28ca04ad/test/extended/networking/services.go#L26
[1] https://github.com/openshift/origin/blob/3854d32174b5e9ddaded1dfcc8a865bb28ca04ad/test/extended/networking/services.go#L57-L63
[2] https://github.com/openshift/cluster-etcd-operator/blob/d93728daa2fb69410025029740b0f8826479c4c3/pkg/operator/starter.go#L390

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jluhrsen jluhrsen changed the title from "CORENET-6605: add retry logic for transient api server errors" to "CORENET-6605: avoid flapping Degraded on transient failures" on Jan 14, 2026
@jluhrsen
Contributor Author

/payload-aggregate periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2 10

@openshift-ci
Contributor

openshift-ci bot commented Jan 14, 2026

@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-serial-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/e5ca30f0-f0dc-11f0-8bd4-ffddbc3ce3e2-0

@jluhrsen
Contributor Author

/test e2e-aws-ovn-upgrade

CNO is going Degraded on the first connection issue with the API
server, but that can happen briefly on a new rollout or during
transient network issues. This is seen periodically in test cases
doing a new rollout like this one [0], which even does retries [1]
to work around the issue.

Instead of setting Degraded immediately on first failure, track
when failures start and only set Degraded after they persist for
2+ minutes. The etcd operator has a similar pattern and uses 2 minutes as its
default. [2]

Also fixes a race condition in SetFromMachineConfigPool where
setNotDegraded() was being called for each non-degraded role in
a loop, which could clear failure tracking before checking all
roles. Restructured to use a two-pass approach: first check all
roles for degradation, then check progressing status.

[0] https://github.com/openshift/origin/blob/3854d32174b5e9ddaded1dfcc8a865bb28ca04ad/test/extended/networking/services.go#L26
[1] https://github.com/openshift/origin/blob/3854d32174b5e9ddaded1dfcc8a865bb28ca04ad/test/extended/networking/services.go#L57-L63
[2] https://github.com/openshift/cluster-etcd-operator/blob/d93728daa2fb69410025029740b0f8826479c4c3/pkg/operator/starter.go#L390

Signed-off-by: Jamo Luhrsen <jluhrsen@gmail.com>