
HYPERFLEET-590 - test: automate Clusters Resource Type - Workflow Validation #21

Open

86254860 wants to merge 1 commit into openshift-hyperfleet:main from 86254860:HYPERFLEET-590

Conversation

@86254860
Contributor

@86254860 86254860 commented Feb 6, 2026

Summary

  • Automate Clusters Resource Type - Workflow Validation test
  • Update the related test case design to align with the current code implementation
  • Rename gcp.json to cluster-request.json

Testing

  • Passed

Summary by CodeRabbit

Tests

  • Enhanced cluster creation workflow validation with comprehensive status condition checks, including initial and final state verification.
  • Improved adapter metadata and condition validation throughout the cluster lifecycle.
  • Refined cluster cleanup process for more reliable test execution.

@openshift-ci

openshift-ci bot commented Feb 6, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign aredenba-rh for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai

coderabbitai bot commented Feb 6, 2026

Walkthrough

This PR refactors the cluster creation end-to-end test to implement comprehensive workflow validation. The test flow changes from a partial, GCP-specific validation to a generalized cluster workflow that includes: API-based cluster creation, initial status verification (conditions set to false), polling until the Ready condition becomes true, detailed adapter-level condition and metadata validation, final state verification (conditions set to true), and cleanup via Kubernetes namespace deletion. The test helper functions are updated to use kubectl namespace deletion as a workaround for cluster cleanup instead of direct API deletion.

Sequence Diagram

sequenceDiagram
    actor Test as Test Suite
    participant API as Cluster API
    participant K8s as Kubernetes
    
    Test->>API: POST /clusters (create with cluster-request.json)
    API-->>Test: Cluster created (ID returned)
    Test->>API: GET /clusters/{id} (verify initial status)
    API-->>Test: Ready=false, Available=false
    Test->>API: Poll cluster status (wait for Ready)
    loop Until Ready=true
        API-->>Test: status update
    end
    Test->>API: GET /clusters/{id}/adapters (verify adapter conditions)
    API-->>Test: adapter metadata, conditions (Applied, Available, Health)
    Test->>API: GET /clusters/{id} (final status check)
    API-->>Test: Ready=true, Available=true
    Test->>K8s: kubectl delete namespace {cluster_id}
    K8s-->>Test: namespace deleted
    Test->>Test: cleanup complete
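For illustration, the polling step in the diagram could look roughly like the Gomega-based sketch below. The getCluster placeholder, the condition strings, and the timeout values are assumptions for this sketch, not the PR's actual helper code.

// Sketch only: poll the cluster until the Ready condition reports True.
// getCluster, Condition, and Cluster are hypothetical stand-ins for the
// PR's API client types; the timeouts are illustrative. Intended to run
// inside a Ginkgo spec where a Gomega fail handler is registered.
package cluster

import (
    "context"
    "time"

    . "github.com/onsi/gomega"
)

type Condition struct {
    Type   string
    Status string
}

type Cluster struct {
    Conditions []Condition
}

// getCluster stands in for the real API call (GET /clusters/{id}).
var getCluster func(ctx context.Context, id string) (*Cluster, error)

func waitForReady(ctx context.Context, clusterID string) {
    Eventually(func(g Gomega) {
        cluster, err := getCluster(ctx, clusterID)
        g.Expect(err).NotTo(HaveOccurred())
        g.Expect(hasCondition(cluster.Conditions, "Ready", "True")).To(BeTrue(),
            "cluster should eventually report Ready=True")
    }).WithTimeout(20 * time.Minute).WithPolling(30 * time.Second).Should(Succeed())
}

func hasCondition(conds []Condition, condType, status string) bool {
    for _, c := range conds {
        if c.Type == condType && c.Status == status {
            return true
        }
    }
    return false
}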

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

lgtm, approved

Suggested reviewers

  • Mischulee
  • aredenba-rh
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name         | Status    | Explanation
Description Check  | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Title check        | ✅ Passed | The title clearly describes the main change: automating a Clusters Resource Type workflow validation test, which aligns with all modified files and the PR objectives.
Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@e2e/cluster/creation.go`:
- Around line 51-54: Replace the hardcoded "Available" string with the
client.ConditionTypeAvailable constant in calls that check cluster conditions:
update the HasResourceCondition invocation that currently passes "Available"
(where you call h.HasResourceCondition(cluster.Status.Conditions, "Available",
openapi.ResourceConditionStatusFalse)) to use client.ConditionTypeAvailable, and
likewise replace the other comparison/Expect that uses "Available" (the one
checking for ResourceConditionStatusTrue/False further down) to use
client.ConditionTypeAvailable so both condition checks consistently reference
the constant.

In `@test-design/testcases/cluster.md`:
- Line 111: Duplicate heading number: change the second "Step 5: Cleanup
resources" heading to "Step 6: Cleanup resources" so steps are sequential;
locate the heading text "Step 5: Cleanup resources" in the cluster.md content
and update the numeral to 6, and also scan for any in-document references or
cross-links that point to "Step 5: Cleanup resources" (or to step numbers after
it) and update them accordingly to preserve correct ordering.
🧹 Nitpick comments (2)
pkg/helper/helper.go (1)

41-57: Consider validating clusterID before shell execution.

While this is test code, passing clusterID directly to exec.CommandContext could be a concern if the value ever contains shell metacharacters. The ID comes from the API response, so it's likely safe, but a defensive validation could prevent issues if the ID format changes.

🛡️ Optional: Add basic validation for clusterID
 func (h *Helper) CleanupTestCluster(ctx context.Context, clusterID string) error {
+    // Basic validation to ensure clusterID is a valid Kubernetes namespace name
+    if clusterID == "" || strings.ContainsAny(clusterID, " \t\n;|&$`") {
+        return fmt.Errorf("invalid cluster ID: %q", clusterID)
+    }
+
     logger.Info("deleting cluster namespace (workaround)", "cluster_id", clusterID, "namespace", clusterID)

This would require adding "strings" to the imports.
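Alternatively, a stricter check could enforce the DNS-1123 label format that Kubernetes requires for namespace names. The snippet below is only an illustrative sketch, not a committable suggestion for this PR.

// Sketch: reject any clusterID that is not a legal Kubernetes namespace
// name (DNS-1123 label) before handing it to kubectl. Illustrative only.
package helper

import (
    "fmt"
    "regexp"
)

// DNS-1123 label: lowercase alphanumerics and '-', 1-63 characters,
// starting and ending with an alphanumeric character.
var namespaceNameRE = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$`)

func validateClusterID(clusterID string) error {
    if !namespaceNameRE.MatchString(clusterID) {
        return fmt.Errorf("invalid cluster ID %q: not a valid namespace name", clusterID)
    }
    return nil
}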

e2e/cluster/creation.go (1)

81-82: The ObservedGeneration assertion assumes generation is always 1 for new clusters.

This is reasonable for a creation test but could become brittle if the system behavior changes (e.g., if generation starts at 0 or is incremented differently). Consider documenting this assumption or making the assertion more flexible.

💡 Alternative: Assert generation is positive instead of exactly 1
-                        g.Expect(adapter.ObservedGeneration).To(Equal(int32(1)),
-                            "adapter %s should have observed_generation=1 for new creation request", adapter.Adapter)
+                        g.Expect(adapter.ObservedGeneration).To(BeNumerically(">=", int32(1)),
+                            "adapter %s should have observed_generation >= 1 for creation request", adapter.Adapter)

Comment on lines +51 to +54
hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
"Available", openapi.ResourceConditionStatusFalse)
Expect(hasAvailableFalse).To(BeTrue(),
"initial cluster conditions should have Available=False")

@coderabbitai coderabbitai bot Feb 6, 2026


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Search for ConditionType definitions in the codebase
rg -n "ConditionType" --type go -i

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 3175


🏁 Script executed:

# Look at the specific file and lines mentioned in the review
head -160 e2e/cluster/creation.go | tail -120

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 7132


🏁 Script executed:

# Check the client package imports and structure
find . -name "*.go" -path "*/client/*" | head -20

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 200


Replace hardcoded "Available" strings with client.ConditionTypeAvailable constant.

The code defines and uses the client.ConditionTypeAvailable constant elsewhere in the file (line 94), but uses hardcoded "Available" strings at lines 51-54 and 137-139. This inconsistency should be corrected for maintainability.

♻️ Suggested fix
-                hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
-                    "Available", openapi.ResourceConditionStatusFalse)
+                hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
+                    client.ConditionTypeAvailable, openapi.ResourceConditionStatusFalse)

And at lines 137-139:

-                hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions,
-                    "Available", openapi.ResourceConditionStatusTrue)
+                hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions,
+                    client.ConditionTypeAvailable, openapi.ResourceConditionStatusTrue)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
"Available", openapi.ResourceConditionStatusFalse)
Expect(hasAvailableFalse).To(BeTrue(),
"initial cluster conditions should have Available=False")
hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
client.ConditionTypeAvailable, openapi.ResourceConditionStatusFalse)
Expect(hasAvailableFalse).To(BeTrue(),
"initial cluster conditions should have Available=False")
🤖 Prompt for AI Agents
In `@e2e/cluster/creation.go` around lines 51 - 54, Replace the hardcoded
"Available" string with the client.ConditionTypeAvailable constant in calls that
check cluster conditions: update the HasResourceCondition invocation that
currently passes "Available" (where you call
h.HasResourceCondition(cluster.Status.Conditions, "Available",
openapi.ResourceConditionStatusFalse)) to use client.ConditionTypeAvailable, and
likewise replace the other comparison/Expect that uses "Available" (the one
checking for ResourceConditionStatusTrue/False further down) to use
client.ConditionTypeAvailable so both condition checks consistently reference
the constant.

Contributor


+1 to use the constant value.



@yasun1, thank you for confirming! Using the constant will improve maintainability and consistency across the codebase. 👍

- This confirms the cluster has reached the desired end state

#### Step 4: Cleanup resources
#### Step 5: Cleanup resources


⚠️ Potential issue | 🟡 Minor

Duplicate step number: "Step 5" appears twice.

Line 99 defines "Step 5: Verify final cluster state" and line 111 defines another "Step 5: Cleanup resources". The cleanup step should be numbered "Step 6".

📝 Proposed fix
-#### Step 5: Cleanup resources
+#### Step 6: Cleanup resources
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
#### Step 5: Cleanup resources
#### Step 6: Cleanup resources
🤖 Prompt for AI Agents
In `@test-design/testcases/cluster.md` at line 111, Duplicate heading number:
change the second "Step 5: Cleanup resources" heading to "Step 6: Cleanup
resources" so steps are sequential; locate the heading text "Step 5: Cleanup
resources" in the cluster.md content and update the numeral to 6, and also scan
for any in-document references or cross-links that point to "Step 5: Cleanup
resources" (or to step numbers after it) and update them accordingly to preserve
correct ordering.

// 4. Final cluster state verification
ginkgo.It("should validate complete workflow for clusters resource type from creation to Ready state",
func(ctx context.Context) {
ginkgo.By("Submit a \"clusters\" resource type request via API")
Contributor


Would it be better as "Submit an API request to create a Cluster resource"?

Expect(cluster.Id).NotTo(BeNil(), "cluster ID should be generated")
clusterID = *cluster.Id
ginkgo.GinkgoWriter.Printf("Created cluster ID: %s\n", clusterID)

Contributor


There is a redundant blank line here.

Comment on lines +51 to +54
hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
"Available", openapi.ResourceConditionStatusFalse)
Expect(hasAvailableFalse).To(BeTrue(),
"initial cluster conditions should have Available=False")
Contributor


+1 to use the constant value.


hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions,
"Available", openapi.ResourceConditionStatusTrue)
Expect(hasAvailable).To(BeTrue(), "cluster should have Available=True condition")
Contributor


The verification steps are somewhat reversed. We should verify that the status of all adapters is true, then verify that the Available condition of the cluster resource is true, and then the Ready condition.

But actually, since we are only validating the two cluster-level conditions, we could validate them before the adapter status. In that case, I do not think we need the "Verify final cluster state" step.
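Roughly what I mean, as a self-contained sketch of the check sequence (all adapter conditions True first, then the cluster-level Available condition, then Ready); the types and helpers below are illustrative placeholders, not our actual client or helper API.

// Sketch of the verification order suggested above. Condition, Adapter,
// and verifyWorkflow are placeholder names for illustration only.
package cluster

import "fmt"

type Condition struct {
    Type   string
    Status string
}

type Adapter struct {
    Name       string
    Conditions []Condition
}

func conditionTrue(conds []Condition, condType string) bool {
    for _, c := range conds {
        if c.Type == condType {
            return c.Status == "True"
        }
    }
    return false
}

func verifyWorkflow(adapters []Adapter, clusterConds []Condition) error {
    // 1. Every adapter condition (e.g. Applied, Available, Health) must be True.
    for _, a := range adapters {
        for _, c := range a.Conditions {
            if c.Status != "True" {
                return fmt.Errorf("adapter %s condition %s is %s, want True", a.Name, c.Type, c.Status)
            }
        }
    }
    // 2. Cluster-level Available, then 3. cluster-level Ready.
    for _, t := range []string{"Available", "Ready"} {
        if !conditionTrue(clusterConds, t) {
            return fmt.Errorf("cluster condition %s is not True", t)
        }
    }
    return nil
}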

