21 changes: 21 additions & 0 deletions configs/config.yaml
@@ -85,3 +85,24 @@ log:
# Log output: stdout, stderr
# Can be overridden by: HYPERFLEET_LOG_OUTPUT
output: stdout

# ============================================================================
# Adapter Configuration
# ============================================================================

adapters:
# Required adapters for cluster resources
# List of adapter names that must be present and have correct conditions
# when validating cluster adapter execution
#
# Can be overridden by: HYPERFLEET_ADAPTERS_CLUSTER (comma-separated)
cluster:
- "cl-namespace"

# Required adapters for nodepool resources
# List of adapter names that must be present and have correct conditions
# when validating nodepool adapter execution
#
# Can be overridden by: HYPERFLEET_ADAPTERS_NODEPOOL (comma-separated)
nodepool:
- "np-configmap"
Comment on lines +93 to +108

⚠️ Potential issue | 🟠 Major

Adapter names here don’t match code defaults and test expectations.

Line 100 and Line 108 use abbreviated names (cl-namespace, np-configmap), while the defaults and test design reference clusters-namespace, clusters-job, clusters-deployment, and nodepools-configmap. Because this file overrides defaults, adapter validation will likely fail unless names align.

🔧 Suggested fix
 adapters:
   # Required adapters for cluster resources
@@
   cluster:
-    - "cl-namespace"
+    - "clusters-namespace"
+    - "clusters-job"
+    - "clusters-deployment"
@@
   nodepool:
-    - "np-configmap"
+    - "nodepools-configmap"
🤖 Prompt for AI Agents
In `@configs/config.yaml` around lines 93 - 108, The adapter names in the cluster
and nodepool lists are incorrect: replace "cl-namespace" with the full expected
adapter names (e.g., "clusters-namespace") and "np-configmap" with
"nodepools-configmap", and ensure the cluster list includes the other expected
adapters ("clusters-job" and "clusters-deployment") so they match code defaults
and test expectations; update the entries under the cluster and nodepool keys
(and verify compatibility with HYPERFLEET_ADAPTERS_CLUSTER /
HYPERFLEET_ADAPTERS_NODEPOOL overrides) so adapter validation uses the correct
names.
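
For context on the override path mentioned above, here is a minimal, self-contained sketch of how a comma-separated HYPERFLEET_ADAPTERS_CLUSTER / HYPERFLEET_ADAPTERS_NODEPOOL value could replace the YAML lists. The struct tags and the applyEnvOverrides/splitAndTrim helpers are illustrative assumptions, not the repo's actual pkg/config code:

```go
package config

import (
	"os"
	"strings"
)

// AdaptersConfig mirrors the adapters block in configs/config.yaml.
type AdaptersConfig struct {
	Cluster  []string `yaml:"cluster"`
	Nodepool []string `yaml:"nodepool"`
}

// applyEnvOverrides replaces the YAML-provided lists when the documented
// comma-separated environment variables are set (hypothetical helper).
func applyEnvOverrides(a *AdaptersConfig) {
	if v := os.Getenv("HYPERFLEET_ADAPTERS_CLUSTER"); v != "" {
		a.Cluster = splitAndTrim(v)
	}
	if v := os.Getenv("HYPERFLEET_ADAPTERS_NODEPOOL"); v != "" {
		a.Nodepool = splitAndTrim(v)
	}
}

// splitAndTrim turns "clusters-namespace, clusters-job" into a clean slice.
func splitAndTrim(s string) []string {
	parts := strings.Split(s, ",")
	out := make([]string, 0, len(parts))
	for _, p := range parts {
		if p = strings.TrimSpace(p); p != "" {
			out = append(out, p)
		}
	}
	return out
}
```

Whatever mechanism the repo actually uses, the point of the comment stands: because configs/config.yaml (or an env override) replaces the code defaults, the names listed there must match the adapter names the tests expect, e.g. clusters-namespace and nodepools-configmap.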

237 changes: 159 additions & 78 deletions e2e/cluster/creation.go
@@ -1,88 +1,169 @@
package cluster

import (
"context"
"context"

"github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega" //nolint:staticcheck // dot import for test readability
"github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega" //nolint:staticcheck // dot import for test readability

"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/helper"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/labels"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/api/openapi"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/client"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/helper"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/labels"
)

var lifecycleTestName = "[Suite: cluster][baseline] Full Cluster Creation Flow on GCP"
var lifecycleTestName = "[Suite: cluster][baseline] Clusters Resource Type - Workflow Validation"

var _ = ginkgo.Describe(lifecycleTestName,
ginkgo.Label(labels.Tier0),
func() {
var h *helper.Helper
var clusterID string

ginkgo.BeforeEach(func() {
h = helper.New()
})

ginkgo.It("should create GCP cluster and transition to Ready state with all adapters healthy", func(ctx context.Context) {
ginkgo.By("submitting cluster creation request via POST /api/hyperfleet/v1/clusters")
cluster, err := h.Client.CreateClusterFromPayload(ctx, "testdata/payloads/clusters/gcp.json")
Expect(err).NotTo(HaveOccurred(), "failed to create cluster")

ginkgo.By("verifying API response (HTTP 201 Created)")
Expect(cluster.Id).NotTo(BeNil(), "cluster ID should be generated")
clusterID = *cluster.Id
ginkgo.GinkgoWriter.Printf("Created cluster ID: %s\n", clusterID)

Expect(cluster.Status).NotTo(BeNil(), "cluster status should be present")
/** <TODO>
Expect(cluster.Status.Phase).To(Equal(openapi.NotReady), "cluster should be in NotReady phase initially")

Cluster final status depends on all deployed adapter result, this is still in progress.
Will update this part once adapter scope is finalized.
ginkgo.By("monitoring cluster status - waiting for phase transition to Ready")
err = h.WaitForClusterPhase(ctx, clusterID, openapi.Ready, h.Cfg.Timeouts.Cluster.Ready)
Expect(err).NotTo(HaveOccurred(), "cluster should reach Ready phase")

ginkgo.By("verifying all adapter conditions via /clusters/{id}/statuses endpoint")
const expectedAdapterCount = 1 // GCP cluster expects 1 adapter
Eventually(func(g Gomega) {
statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
g.Expect(err).NotTo(HaveOccurred(), "failed to get cluster statuses")
g.Expect(statuses.Items).To(HaveLen(expectedAdapterCount),
"expected %d adapter(s), got %d", expectedAdapterCount, len(statuses.Items))

for _, adapter := range statuses.Items {
hasApplied := h.HasCondition(adapter.Conditions, client.ConditionTypeApplied, openapi.True)
g.Expect(hasApplied).To(BeTrue(),
"adapter %s should have Applied=True", adapter.Adapter)

hasAvailable := h.HasCondition(adapter.Conditions, client.ConditionTypeAvailable, openapi.True)
g.Expect(hasAvailable).To(BeTrue(),
"adapter %s should have Available=True", adapter.Adapter)

hasHealth := h.HasCondition(adapter.Conditions, client.ConditionTypeHealth, openapi.True)
g.Expect(hasHealth).To(BeTrue(),
"adapter %s should have Health=True", adapter.Adapter)
}
}, h.Cfg.Timeouts.Adapter.Processing, h.Cfg.Polling.Interval).Should(Succeed())

ginkgo.By("verifying final cluster state")
finalCluster, err := h.Client.GetCluster(ctx, clusterID)
Expect(err).NotTo(HaveOccurred(), "failed to get final cluster state")
Expect(finalCluster.Status).NotTo(BeNil(), "cluster status should be present")
Expect(finalCluster.Status.Phase).To(Equal(openapi.Ready), "cluster phase should be Ready")
**/
})

ginkgo.AfterEach(func(ctx context.Context) {
// Skip cleanup if helper not initialized or no cluster created
if h == nil || clusterID == "" {
return
}

ginkgo.By("cleaning up cluster " + clusterID)
if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster %s: %v\n", clusterID, err)
}
})
},
ginkgo.Label(labels.Tier0),
func() {
var h *helper.Helper
var clusterID string

ginkgo.BeforeEach(func() {
h = helper.New()
})

// This test validates the end-to-end cluster lifecycle workflow:
// 1. Cluster creation via API with initial condition validation
// 2. Required adapter execution with comprehensive metadata validation
// 3. Final cluster state verification (Ready and Available conditions)
ginkgo.It("should validate complete workflow for clusters resource type from creation to Ready state",
func(ctx context.Context) {
ginkgo.By("Submit an API request to create a Cluster resource")
cluster, err := h.Client.CreateClusterFromPayload(ctx, "testdata/payloads/clusters/cluster-request.json")
Expect(err).NotTo(HaveOccurred(), "failed to create cluster")
Expect(cluster.Id).NotTo(BeNil(), "cluster ID should be generated")
clusterID = *cluster.Id
ginkgo.GinkgoWriter.Printf("Created cluster ID: %s\n", clusterID)
Expect(cluster.Status).NotTo(BeNil(), "cluster status should be present")

ginkgo.By("Verify initial status of cluster")
// Verify initial conditions are False, indicating workflow has not completed yet
// This ensures the cluster starts in the correct initial state
hasReadyFalse := h.HasResourceCondition(cluster.Status.Conditions,
client.ConditionTypeReady, openapi.ResourceConditionStatusFalse)
Expect(hasReadyFalse).To(BeTrue(),
"initial cluster conditions should have Ready=False")

hasAvailableFalse := h.HasResourceCondition(cluster.Status.Conditions,
client.ConditionTypeAvailable, openapi.ResourceConditionStatusFalse)
Expect(hasAvailableFalse).To(BeTrue(),
"initial cluster conditions should have Available=False")

ginkgo.By("Verify required adapter execution results")
// Validate required adapters from config have completed successfully
// If an adapter fails, we can identify which specific adapter failed
Eventually(func(g Gomega) {
statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
g.Expect(err).NotTo(HaveOccurred(), "failed to get cluster statuses")
g.Expect(statuses.Items).NotTo(BeEmpty(), "at least one adapter should have executed")

// Build a map of adapter statuses for easy lookup
adapterMap := make(map[string]openapi.AdapterStatus)
for _, adapter := range statuses.Items {
adapterMap[adapter.Adapter] = adapter
}

// Validate each required adapter from config
for _, requiredAdapter := range h.Cfg.Adapters.Cluster {
adapter, exists := adapterMap[requiredAdapter]
g.Expect(exists).To(BeTrue(),
"required adapter %s should be present in adapter statuses", requiredAdapter)

// Validate adapter-level metadata
g.Expect(adapter.CreatedTime).NotTo(BeZero(),
"adapter %s should have valid created_time", adapter.Adapter)
g.Expect(adapter.LastReportTime).NotTo(BeZero(),
"adapter %s should have valid last_report_time", adapter.Adapter)
g.Expect(adapter.ObservedGeneration).To(Equal(int32(1)),
"adapter %s should have observed_generation=1 for new creation request", adapter.Adapter)

hasApplied := h.HasAdapterCondition(
adapter.Conditions,
client.ConditionTypeApplied,
openapi.AdapterConditionStatusTrue,
)
g.Expect(hasApplied).To(BeTrue(),
"adapter %s should have Applied=True", adapter.Adapter)

hasAvailable := h.HasAdapterCondition(
adapter.Conditions,
client.ConditionTypeAvailable,
openapi.AdapterConditionStatusTrue,
)
g.Expect(hasAvailable).To(BeTrue(),
"adapter %s should have Available=True", adapter.Adapter)

hasHealth := h.HasAdapterCondition(
adapter.Conditions,
client.ConditionTypeHealth,
openapi.AdapterConditionStatusTrue,
)
g.Expect(hasHealth).To(BeTrue(),
"adapter %s should have Health=True", adapter.Adapter)

// Validate condition metadata for each condition
for _, condition := range adapter.Conditions {
g.Expect(condition.Reason).NotTo(BeNil(),
"adapter %s condition %s should have non-nil reason", adapter.Adapter, condition.Type)
g.Expect(*condition.Reason).NotTo(BeEmpty(),
"adapter %s condition %s should have non-empty reason", adapter.Adapter, condition.Type)

g.Expect(condition.Message).NotTo(BeNil(),
"adapter %s condition %s should have non-nil message", adapter.Adapter, condition.Type)
g.Expect(*condition.Message).NotTo(BeEmpty(),
"adapter %s condition %s should have non-empty message", adapter.Adapter, condition.Type)

g.Expect(condition.LastTransitionTime).NotTo(BeZero(),
"adapter %s condition %s should have valid last_transition_time", adapter.Adapter, condition.Type)
}
Comment on lines +57 to +120

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat e2e/cluster/creation.go | head -150

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 8687


🌐 Web query:

Gomega Eventually function behavior nil dereference panic

💡 Result:

In Gomega (onsi/gomega), when you pass a function to Eventually, Gomega polls that function synchronously until the matcher passes or the timeout/context ends. (onsi.github.io)

A regular Go runtime panic inside that polled function—like panic: runtime error: invalid memory address or nil pointer dereference—is not an “Eventually failure”; it’s a bug in the polled code. In practice, it will escape the polling loop and abort the test/spec unless you handle it yourself (e.g., avoid nil dereferences or wrap the body with your own defer/recover).

The one important exception is that Gomega intentionally uses panics as control signals for async polling in specific cases (e.g. StopTrying(...).Now() and TryAgainAfter(...).Now()), and Eventually/Consistently are designed to interpret those signals (stop or delay polling) rather than crash the test. (onsi.github.io)

So, if you’re seeing nil-deref panics in an Eventually(func() ...), the fix is to make the polled function nil-safe (return a value/error you can assert on) rather than letting it dereference nil.
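
A minimal sketch of the nil-safe polled-function shape described above, using a stand-in condition type with pointer Reason/Message fields rather than the generated openapi structs:

```go
package cluster

import (
	. "github.com/onsi/gomega"
)

// condition stands in for the generated openapi condition type,
// whose Reason/Message fields are pointers and may be nil.
type condition struct {
	Type    string
	Reason  *string
	Message *string
}

// checkConditions is written to be safe to poll: every dereference is
// guarded, so a missing Reason/Message can only surface as a normal Gomega
// failure (and a retry by Eventually), never as a nil-pointer dereference,
// regardless of how the surrounding polling treats a failed assertion.
func checkConditions(g Gomega, conds []condition) {
	for _, c := range conds {
		g.Expect(c.Reason).NotTo(BeNil(), "condition %s should have a reason", c.Type)
		if c.Reason != nil {
			g.Expect(*c.Reason).NotTo(BeEmpty(), "condition %s reason should be non-empty", c.Type)
		}
		g.Expect(c.Message).NotTo(BeNil(), "condition %s should have a message", c.Type)
		if c.Message != nil {
			g.Expect(*c.Message).NotTo(BeEmpty(), "condition %s message should be non-empty", c.Type)
		}
	}
}
```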



Guard Reason/Message before dereference inside Eventually.

In the Eventually block, *condition.Reason and *condition.Message are dereferenced immediately after the nil checks. Guarding the dereferences keeps the polled function nil-safe on its own, rather than relying on the failed g.Expect aborting the callback before the next line runs.

🛠️ Suggested fix
                        g.Expect(condition.Reason).NotTo(BeNil(),
                            "adapter %s condition %s should have non-nil reason", adapter.Adapter, condition.Type)
+                        if condition.Reason != nil {
                            g.Expect(*condition.Reason).NotTo(BeEmpty(),
                                "adapter %s condition %s should have non-empty reason", adapter.Adapter, condition.Type)
+                        }

                        g.Expect(condition.Message).NotTo(BeNil(),
                            "adapter %s condition %s should have non-nil message", adapter.Adapter, condition.Type)
+                        if condition.Message != nil {
                            g.Expect(*condition.Message).NotTo(BeEmpty(),
                                "adapter %s condition %s should have non-empty message", adapter.Adapter, condition.Type)
+                        }
🤖 Prompt for AI Agents
In `@e2e/cluster/creation.go` around lines 57 - 120, The Eventually block loops
over adapter.Conditions and currently dereferences condition.Reason and
condition.Message after non-fatal g.Expect checks, which can panic during
retries; keep the existing g.Expect(condition.Reason).NotTo(BeNil(), ...) and
g.Expect(condition.Message).NotTo(BeNil(), ...) but only dereference inside a
guarded branch (e.g., if condition.Reason != nil {
g.Expect(*condition.Reason).NotTo(BeEmpty(), ...) } and if condition.Message !=
nil { g.Expect(*condition.Message).NotTo(BeEmpty(), ...) }) within the same loop
so nil pointers are never dereferenced during the polling in the Eventually
block (refer to the Eventually block, the loop over adapter.Conditions, and the
fields condition.Reason / condition.Message).

}
}, h.Cfg.Timeouts.Adapter.Processing, h.Cfg.Polling.Interval).Should(Succeed())

ginkgo.By("Verify final cluster state")
// Wait for cluster Ready condition and verify both Ready and Available conditions are True
// This confirms the cluster has reached the desired end state
err = h.WaitForClusterCondition(
ctx,
clusterID,
client.ConditionTypeReady,
openapi.ResourceConditionStatusTrue,
h.Cfg.Timeouts.Cluster.Ready,
)
Expect(err).NotTo(HaveOccurred(), "cluster Ready condition should transition to True")

finalCluster, err := h.Client.GetCluster(ctx, clusterID)
Expect(err).NotTo(HaveOccurred(), "failed to get final cluster state")
Expect(finalCluster.Status).NotTo(BeNil(), "cluster status should be present")

hasReady := h.HasResourceCondition(finalCluster.Status.Conditions,
client.ConditionTypeReady, openapi.ResourceConditionStatusTrue)
Expect(hasReady).To(BeTrue(), "cluster should have Ready=True condition")

hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions,
client.ConditionTypeAvailable, openapi.ResourceConditionStatusTrue)
Expect(hasAvailable).To(BeTrue(), "cluster should have Available=True condition")
Contributor

The verification steps are somewhat reversed. We should verify that the status of all adapters is true, then verify that the Available condition of the cluster resource is true, and then the Ready condition.

But actually, since we are just validating the two cluster-level conditions, we could validate them before the adapter status. If so, I do not think we need the "Verify final cluster state" step.

Contributor Author
Good point! I've adjusted the verification order and added new input configs to define the required adapters for clusters and nodepools, which lets the E2E test cases know which adapters they should check.
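
A minimal sketch of the adjusted ordering discussed in this exchange (adapters checked first, then Available, then Ready), reusing the WaitForClusterCondition helper and types visible in this diff; the wrapper function name is hypothetical and it reuses the Ready timeout for both waits:

```go
package cluster

import (
	"context"

	"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/api/openapi"
	"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/client"
	"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/helper"
)

// waitForClusterConditionsInOrder waits for Available=True before Ready=True,
// to be called after the required adapter statuses have been validated.
func waitForClusterConditionsInOrder(ctx context.Context, h *helper.Helper, clusterID string) error {
	if err := h.WaitForClusterCondition(ctx, clusterID,
		client.ConditionTypeAvailable, openapi.ResourceConditionStatusTrue,
		h.Cfg.Timeouts.Cluster.Ready); err != nil {
		return err
	}
	return h.WaitForClusterCondition(ctx, clusterID,
		client.ConditionTypeReady, openapi.ResourceConditionStatusTrue,
		h.Cfg.Timeouts.Cluster.Ready)
}
```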

Comment on lines +124 to +146

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's verify the file exists and check the lines in question
head -150 e2e/cluster/creation.go | tail -50

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 3274


🏁 Script executed:

# Let's find the implementation of WaitForClusterCondition
rg -n "func.*WaitForClusterCondition" -A 20

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 1556


🏁 Script executed:

# Let's find the implementation of HasResourceCondition
rg -n "func.*HasResourceCondition" -A 15

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 1080


🏁 Script executed:

# Let's check if there are other tests that wait for Available condition
rg -n "WaitForClusterCondition.*Available" -B 2 -A 2

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 61


🏁 Script executed:

# Let's check if Available is being waited for anywhere in tests
rg -n "Available" e2e/ | head -30

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 2048


🏁 Script executed:

# Let's look at the adapter checking pattern around line 85-100
sed -n '70,105p' e2e/cluster/creation.go

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 2097


🏁 Script executed:

# Let's see the full Eventually() block that wraps the adapter checks
sed -n '50,105p' e2e/cluster/creation.go

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 3319


🏁 Script executed:

# Let's look at the timeout configurations
cat pkg/helper/config.go | grep -A 5 -B 5 "Timeouts"

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 131


🏁 Script executed:

# Let's find the config file
rg -l "Timeouts.*Cluster" --type go

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 148


🏁 Script executed:

# Let's examine the config file
rg -n "Timeouts" pkg/config/config.go -A 10 | head -50

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 1975


🏁 Script executed:

# Let's check if there are any other tests that wait for multiple conditions sequentially
rg -n "WaitForClusterCondition.*Ready" -A 5 -B 2

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 853


🏁 Script executed:

# Let's verify the full context around the suggested fix one more time to ensure accuracy
sed -n '120,155p' e2e/cluster/creation.go

Repository: openshift-hyperfleet/hyperfleet-e2e

Length of output: 2175


Wait for Available=True to avoid readiness flake.

The code waits for the cluster Ready condition (line 127-135) but then immediately asserts that Available is also True (line 144-146) without waiting for it. If the Available condition transitions after Ready, this assertion can intermittently fail. Add a wait for Available before fetching and checking the final state.

🛠️ Suggested fix
                 err = h.WaitForClusterCondition(
                     ctx,
                     clusterID,
                     client.ConditionTypeReady,
                     openapi.ResourceConditionStatusTrue,
                     h.Cfg.Timeouts.Cluster.Ready,
                 )
                 Expect(err).NotTo(HaveOccurred(), "cluster Ready condition should transition to True")
+
+                err = h.WaitForClusterCondition(
+                    ctx,
+                    clusterID,
+                    client.ConditionTypeAvailable,
+                    openapi.ResourceConditionStatusTrue,
+                    h.Cfg.Timeouts.Cluster.Ready,
+                )
+                Expect(err).NotTo(HaveOccurred(), "cluster Available condition should transition to True")
🤖 Prompt for AI Agents
In `@e2e/cluster/creation.go` around lines 124 - 146, The test waits for Ready but
then immediately asserts Available, which can flake; call
h.WaitForClusterCondition for client.ConditionTypeAvailable (with
openapi.ResourceConditionStatusTrue and an appropriate timeout, e.g.,
h.Cfg.Timeouts.Cluster.Available or the Ready timeout) before calling
h.Client.GetCluster and before using h.HasResourceCondition to check Available,
so the Available condition is waited for and settled prior to the final
assertions.


// Validate observedGeneration for Ready and Available conditions
for _, condition := range finalCluster.Status.Conditions {
if condition.Type == client.ConditionTypeReady || condition.Type == client.ConditionTypeAvailable {
Expect(condition.ObservedGeneration).To(Equal(int32(1)),
"cluster condition %s should have observed_generation=1 for new creation request", condition.Type)
}
}
})

ginkgo.AfterEach(func(ctx context.Context) {
// Skip cleanup if helper not initialized or no cluster created
if h == nil || clusterID == "" {
return
}

ginkgo.By("cleaning up cluster " + clusterID)
if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster %s: %v\n", clusterID, err)
}
})
},
)