HYPERFLEET-590 - test: automate Clusters Resource Type - Workflow Validation #21
wants to merge 1 commit into openshift-hyperfleet:main from
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Walkthrough

This PR refactors the cluster creation end-to-end test to implement comprehensive workflow validation. The test flow changes from a partial, GCP-specific validation to a generalized cluster workflow that includes: API-based cluster creation, initial status verification (conditions set to false), polling until the Ready condition becomes true, detailed adapter-level condition and metadata validation, final state verification (conditions set to true), and cleanup via Kubernetes namespace deletion. The test helper functions are updated to use kubectl namespace deletion as a workaround for cluster cleanup instead of direct API deletion.

Sequence Diagram

```mermaid
sequenceDiagram
    actor Test as Test Suite
    participant API as Cluster API
    participant K8s as Kubernetes
    Test->>API: POST /clusters (create with cluster-request.json)
    API-->>Test: Cluster created (ID returned)
    Test->>API: GET /clusters/{id} (verify initial status)
    API-->>Test: Ready=false, Available=false
    Test->>API: Poll cluster status (wait for Ready)
    loop Until Ready=true
        API-->>Test: status update
    end
    Test->>API: GET /clusters/{id}/adapters (verify adapter conditions)
    API-->>Test: adapter metadata, conditions (Applied, Available, Health)
    Test->>API: GET /clusters/{id} (final status check)
    API-->>Test: Ready=true, Available=true
    Test->>K8s: kubectl delete namespace {cluster_id}
    K8s-->>Test: namespace deleted
    Test->>Test: cleanup complete
```
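The "loop until Ready=true" step in the walkthrough can be sketched as a plain polling loop. This is an illustrative stand-in only: the `Condition` struct and `getConditions` stub below are assumptions for the sketch, not the real `openapi` types or API client.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Condition mirrors the general shape of an API resource condition.
// The field names here are assumptions for illustration.
type Condition struct {
	Type   string
	Status string
}

// getConditions stands in for GET /clusters/{id}; it flips Ready to
// True after a few polls so the loop below terminates.
func getConditions(poll int) []Condition {
	status := "False"
	if poll >= 3 {
		status = "True"
	}
	return []Condition{{Type: "Ready", Status: status}}
}

// waitForReady polls until the Ready condition reports True, matching
// the "loop until Ready=true" step of the sequence diagram.
func waitForReady(maxPolls int, interval time.Duration) error {
	for i := 0; i < maxPolls; i++ {
		for _, c := range getConditions(i) {
			if c.Type == "Ready" && c.Status == "True" {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for Ready=True")
}

func main() {
	fmt.Println(waitForReady(10, 10*time.Millisecond) == nil)
}
```

In the actual suite this would typically be expressed with Gomega's `Eventually` rather than a hand-rolled loop; the sketch just makes the termination condition explicit.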
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@e2e/cluster/creation.go`:
- Around line 51-54: Replace the hardcoded "Available" string with the
client.ConditionTypeAvailable constant in calls that check cluster conditions:
update the HasResourceCondition invocation that currently passes "Available"
(where you call h.HasResourceCondition(cluster.Status.Conditions, "Available",
openapi.ResourceConditionStatusFalse)) to use client.ConditionTypeAvailable, and
likewise replace the other comparison/Expect that uses "Available" (the one
checking for ResourceConditionStatusTrue/False further down) to use
client.ConditionTypeAvailable so both condition checks consistently reference
the constant.
In `@test-design/testcases/cluster.md`:
- Line 111: Duplicate heading number: change the second "Step 5: Cleanup
resources" heading to "Step 6: Cleanup resources" so steps are sequential;
locate the heading text "Step 5: Cleanup resources" in the cluster.md content
and update the numeral to 6, and also scan for any in-document references or
cross-links that point to "Step 5: Cleanup resources" (or to step numbers after
it) and update them accordingly to preserve correct ordering.
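The duplicate-heading check described above can be done mechanically. A minimal sketch, in which the here-doc stands in for the real `test-design/testcases/cluster.md` (the heading texts are taken from the review comments below):

```shell
# Sketch: flag duplicated "Step N" headings in a markdown test design.
# The here-doc is a stand-in for test-design/testcases/cluster.md.
cat > /tmp/cluster-steps.md <<'EOF'
#### Step 4: Verify adapter conditions
#### Step 5: Verify final cluster state
#### Step 5: Cleanup resources
EOF

# Print any step number that appears more than once.
grep -o 'Step [0-9]*' /tmp/cluster-steps.md | sort | uniq -d
```

Any line the last command prints is a step number used by two headings.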
🧹 Nitpick comments (2)
pkg/helper/helper.go (1)

41-57: Consider validating `clusterID` before shell execution.

While this is test code, passing `clusterID` directly to `exec.CommandContext` could be a concern if the value ever contains shell metacharacters. The ID comes from the API response, so it's likely safe, but a defensive validation could prevent issues if the ID format changes.

🛡️ Optional: Add basic validation for clusterID

```diff
 func (h *Helper) CleanupTestCluster(ctx context.Context, clusterID string) error {
+	// Basic validation to ensure clusterID is a valid Kubernetes namespace name
+	if clusterID == "" || strings.ContainsAny(clusterID, " \t\n;|&$`") {
+		return fmt.Errorf("invalid cluster ID: %q", clusterID)
+	}
+
 	logger.Info("deleting cluster namespace (workaround)", "cluster_id", clusterID, "namespace", clusterID)
```

This would require adding `"strings"` to the imports.

e2e/cluster/creation.go (1)
81-82: The `ObservedGeneration` assertion assumes generation is always 1 for new clusters.

This is reasonable for a creation test but could become brittle if the system behavior changes (e.g., if generation starts at 0 or is incremented differently). Consider documenting this assumption or making the assertion more flexible.

💡 Alternative: Assert generation is positive instead of exactly 1

```diff
-			g.Expect(adapter.ObservedGeneration).To(Equal(int32(1)),
-				"adapter %s should have observed_generation=1 for new creation request", adapter.Adapter)
+			g.Expect(adapter.ObservedGeneration).To(BeNumerically(">=", int32(1)),
+				"adapter %s should have observed_generation >= 1 for creation request", adapter.Adapter)
```
```go
hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
	"Available", openapi.ResourceConditionStatusFalse)
Expect(hasAvailableFalse).To(BeTrue(),
	"initial cluster conditions should have Available=False")
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Search for ConditionType definitions in the codebase
rg -n "ConditionType" --type go -i
```

Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 3175

🏁 Script executed:

```shell
# Look at the specific file and lines mentioned in the review
head -160 e2e/cluster/creation.go | tail -120
```

Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 7132

🏁 Script executed:

```shell
# Check the client package imports and structure
find . -name "*.go" -path "*/client/*" | head -20
```

Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 200
Replace hardcoded "Available" strings with client.ConditionTypeAvailable constant.
The code defines and uses the client.ConditionTypeAvailable constant elsewhere in the file (line 94), but uses hardcoded "Available" strings at lines 51-54 and 137-139. This inconsistency should be corrected for maintainability.
♻️ Suggested fix

```diff
-	hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
-		"Available", openapi.ResourceConditionStatusFalse)
+	hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
+		client.ConditionTypeAvailable, openapi.ResourceConditionStatusFalse)
```

And at lines 137-139:

```diff
-	hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions,
-		"Available", openapi.ResourceConditionStatusTrue)
+	hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions,
+		client.ConditionTypeAvailable, openapi.ResourceConditionStatusTrue)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
-	"Available", openapi.ResourceConditionStatusFalse)
-Expect(hasAvailableFalse).To(BeTrue(),
-	"initial cluster conditions should have Available=False")
+hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
+	client.ConditionTypeAvailable, openapi.ResourceConditionStatusFalse)
+Expect(hasAvailableFalse).To(BeTrue(),
+	"initial cluster conditions should have Available=False")
```
🤖 Prompt for AI Agents
In `@e2e/cluster/creation.go` around lines 51 - 54, Replace the hardcoded
"Available" string with the client.ConditionTypeAvailable constant in calls that
check cluster conditions: update the HasResourceCondition invocation that
currently passes "Available" (where you call
h.HasResourceCondition(cluster.Status.Conditions, "Available",
openapi.ResourceConditionStatusFalse)) to use client.ConditionTypeAvailable, and
likewise replace the other comparison/Expect that uses "Available" (the one
checking for ResourceConditionStatusTrue/False further down) to use
client.ConditionTypeAvailable so both condition checks consistently reference
the constant.
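The fix amounts to referencing one shared constant at every call site. A minimal self-contained sketch of the pattern — the constant names mirror `client.ConditionTypeAvailable` from the review, but the `Condition` type and `hasCondition` helper are illustrative stand-ins, not the repository's real types:

```go
package main

import "fmt"

// Centralized condition type names, as the review suggests.
// Values assumed to match the API's condition strings.
const (
	ConditionTypeReady     = "Ready"
	ConditionTypeAvailable = "Available"
)

type Condition struct {
	Type   string
	Status string
}

// hasCondition is a stand-in for h.HasResourceCondition: it reports
// whether a condition of the given type has the given status.
func hasCondition(conds []Condition, condType, status string) bool {
	for _, c := range conds {
		if c.Type == condType && c.Status == status {
			return true
		}
	}
	return false
}

func main() {
	conds := []Condition{{Type: ConditionTypeAvailable, Status: "False"}}
	// Using the constant instead of the literal "Available" keeps all
	// call sites consistent and catches typos at compile time.
	fmt.Println(hasCondition(conds, ConditionTypeAvailable, "False"))
}
```

With string literals, a typo like `"Avaliable"` silently evaluates to false; with the constant, it fails to compile.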
+1 to use the constant value.
@yasun1, thank you for confirming! Using the constant will improve maintainability and consistency across the codebase. 👍
```diff
 - This confirms the cluster has reached the desired end state

-#### Step 4: Cleanup resources
+#### Step 5: Cleanup resources
```
Duplicate step number: "Step 5" appears twice.
Line 99 defines "Step 5: Verify final cluster state" and line 111 defines another "Step 5: Cleanup resources". The cleanup step should be numbered "Step 6".
📝 Proposed fix

```diff
-#### Step 5: Cleanup resources
+#### Step 6: Cleanup resources
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-#### Step 5: Cleanup resources
+#### Step 6: Cleanup resources
```
🤖 Prompt for AI Agents
In `@test-design/testcases/cluster.md` at line 111, Duplicate heading number:
change the second "Step 5: Cleanup resources" heading to "Step 6: Cleanup
resources" so steps are sequential; locate the heading text "Step 5: Cleanup
resources" in the cluster.md content and update the numeral to 6, and also scan
for any in-document references or cross-links that point to "Step 5: Cleanup
resources" (or to step numbers after it) and update them accordingly to preserve
correct ordering.
```go
// 4. Final cluster state verification
ginkgo.It("should validate complete workflow for clusters resource type from creation to Ready state",
	func(ctx context.Context) {
		ginkgo.By("Submit a \"clusters\" resource type request via API")
```
Would it be better phrased as "Submit an API request to create a Cluster resource"?
```go
Expect(cluster.Id).NotTo(BeNil(), "cluster ID should be generated")
clusterID = *cluster.Id
ginkgo.GinkgoWriter.Printf("Created cluster ID: %s\n", clusterID)

hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
	"Available", openapi.ResourceConditionStatusFalse)
Expect(hasAvailableFalse).To(BeTrue(),
	"initial cluster conditions should have Available=False")
```
+1 to use the constant value.
```go
hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions,
	"Available", openapi.ResourceConditionStatusTrue)
Expect(hasAvailable).To(BeTrue(), "cluster should have Available=True condition")
```
The verification steps are somewhat reversed. We should verify that the status of all adapters is true, then verify that the Available condition of the cluster resource is true, and then the Ready condition.
Actually, since we are only validating the two cluster-level conditions, we could validate them before the adapter status. If so, I do not think we need the "Verify final cluster state" step.
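The ordering proposed in this comment — all adapter statuses first, then the cluster-level Available condition, then Ready — can be sketched as follows. The `Adapter` and `Cluster` structs are illustrative stand-ins for the real e2e types, not the repository's actual structs:

```go
package main

import "fmt"

// Adapter is a simplified stand-in for an adapter status entry.
type Adapter struct {
	Name      string
	Available bool
}

// Cluster is a simplified stand-in for the cluster's reported state.
type Cluster struct {
	Adapters  []Adapter
	Available bool
	Ready     bool
}

// verify checks state in the reviewer's proposed order: every adapter
// first, then the cluster's Available condition, then Ready. The first
// failure reported is therefore the most specific one.
func verify(c Cluster) error {
	for _, a := range c.Adapters {
		if !a.Available {
			return fmt.Errorf("adapter %s is not available", a.Name)
		}
	}
	if !c.Available {
		return fmt.Errorf("cluster Available condition is not True")
	}
	if !c.Ready {
		return fmt.Errorf("cluster Ready condition is not True")
	}
	return nil
}

func main() {
	c := Cluster{
		Adapters:  []Adapter{{Name: "gcp", Available: true}},
		Available: true,
		Ready:     true,
	}
	fmt.Println(verify(c) == nil)
}
```

Checking adapters first means a failure points at the offending adapter rather than at a derived cluster-level condition, which makes test failures easier to diagnose.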
Summary
Testing
Summary by CodeRabbit
Tests