# HyperFleet Adapter Framework

A configuration-driven framework that runs tasks for cluster provisioning.

An instance of an adapter targets a specific resource, such as a Cluster or NodePool, and provides a clear workflow:
- Listens to events: a CloudEvent informs the adapter which resource to process
  - Supports different types of brokers via the hyperfleet-broker lib
- Param phase: gets parameters from the environment, the event, and the current resource status (by querying the HyperFleet API)
- Decision phase: computes whether an action has to be performed on the resource, using the params
- Resource phase: creates resources using a configured client
  - Kubernetes client: local or remote cluster
  - Maestro client: remote cluster via a Maestro server
- Status reporting: reports the result of task execution to the HyperFleet API
  - Builds the payload by evaluating the status of the resources created in the resource phase
## Prerequisites

- Go 1.24.6 or later
- Docker (for building Docker images)
- Kubernetes 1.19+ (for deployment)
- Helm 3.0+ (for Helm chart deployment)
- golangci-lint (for linting, optional)
## Quick Start

```bash
git clone https://github.com/openshift-hyperfleet/hyperfleet-adapter.git
cd hyperfleet-adapter
make mod-tidy

# Build the binary
make build
# The binary will be created at: bin/hyperfleet-adapter

# Run unit tests
make test

# Run integration tests (pre-built envtest - unprivileged, CI/CD friendly)
make test-integration

# Run integration tests with K3s (faster, may need privileges)
make test-integration-k3s

# Run all tests
make test-all

# Run linter
make lint

# Format code
make fmt
```

## Project Structure

```
hyperfleet-adapter/
├── cmd/
│   └── adapter/           # Main application entry point
├── pkg/
│   ├── errors/            # Error handling utilities
│   └── logger/            # Structured logging with context support
├── internal/
│   ├── broker_consumer/   # Message broker consumer implementations
│   ├── config_loader/     # Configuration loading logic
│   ├── criteria/          # Precondition and CEL evaluation
│   ├── executor/          # Event execution engine
│   ├── hyperfleet_api/    # HyperFleet API client
│   └── k8s_client/        # Kubernetes client wrapper
├── test/                  # Integration tests
├── charts/                # Helm chart for Kubernetes deployment
├── Dockerfile             # Multi-stage Docker build
├── Makefile               # Build and test automation
├── go.mod                 # Go module dependencies
└── README.md              # This file
```
## Make Targets

| Target | Description |
|---|---|
| `make build` | Build binary |
| `make test` | Run unit tests |
| `make test-integration` | Run integration tests with pre-built envtest (unprivileged, CI/CD friendly) |
| `make test-integration-k3s` | Run integration tests with K3s (faster, may need privileges) |
| `make test-all` | Run all tests (unit + integration) |
| `make test-coverage` | Generate test coverage report |
| `make lint` | Run golangci-lint |
| `make image` | Build container image |
| `make image-push` | Build and push container image to registry |
| `make image-dev` | Build and push to personal Quay registry (requires QUAY_USER) |
| `make fmt` | Format code |
| `make mod-tidy` | Tidy Go module dependencies |
| `make clean` | Clean build artifacts |
| `make verify` | Run lint and test |
💡 Tip: Use `make help` to see all available targets with descriptions.
## Tool Management

HyperFleet Adapter uses bingo to manage Go tool dependencies with pinned versions.

Managed tools:

- `goimports` - Code formatting and import organization
- `golangci-lint` - Code linting

Common operations:

```bash
# Install all tools
bingo get

# Install a specific tool
bingo get <tool>

# Update a tool to the latest version
bingo get <tool>@latest

# List all managed tools
bingo list
```

Tool versions are tracked in `.bingo/*.mod` files and loaded automatically via `include .bingo/Variables.mk` in the Makefile.
## Configuration

A HyperFleet Adapter requires several files for configuration:

- Adapter config: configures the adapter framework application
- Adapter task config: configures the adapter task steps that will create resources
- Broker config: configures the specific broker the adapter framework uses to receive CloudEvents

To see all configuration options, read the configuration.md file.
### Adapter Config

The adapter deployment configuration (AdapterConfig) controls runtime and infrastructure settings for the adapter process, such as client connections, retries, and broker subscription details. It is loaded with Viper, so values can be overridden by CLI flags and environment variables in this priority order: CLI flags > env vars > file > defaults.

- Path: `HYPERFLEET_ADAPTER_CONFIG` (required)
- Common fields: `spec.adapter.version`, `spec.debugConfig`, `spec.clients.*` (HyperFleet API, Maestro, broker, Kubernetes)

Reference examples:

- `configs/adapter-deployment-config.yaml` (full reference with env/flag notes)
- `charts/examples/adapter-config.yaml` (minimal deployment example)
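For orientation, a minimal AdapterConfig might look like the sketch below. Only the keys listed above (`spec.adapter.version`, `spec.debugConfig`, `spec.clients.*`) come from this document; everything nested beneath them is an assumption, so consult `configs/adapter-deployment-config.yaml` for the authoritative schema.

```yaml
# Illustrative sketch only; nested field names are assumptions.
spec:
  adapter:
    version: "1.0"
  debugConfig:
    logLevel: debug                 # assumed field
  clients:
    hyperfleetApi:                  # assumed key; one of the spec.clients.* entries
      endpoint: https://hyperfleet-api.example.com
    broker:
      subscription: cluster-events  # overridable via env vars or CLI flags
```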
### Adapter Task Config

The adapter task configuration (AdapterTaskConfig) defines the business logic for processing events: parameters, preconditions, resources to create, and post-actions. This file is loaded as static YAML (no Viper overrides) and is required at runtime.

- Path: `HYPERFLEET_TASK_CONFIG` (required)
- Key sections: `spec.params`, `spec.preconditions`, `spec.resources`, `spec.post`
- Resource manifests: inline YAML or external file via `manifest.ref`

Reference examples:

- `charts/examples/adapter-task-config.yaml` (worked example)
- `configs/adapter-task-config-template.yaml` (complete schema reference)
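The overall shape of a task config, using the key sections named above, might look like this sketch. The top-level keys (`spec.params`, `spec.preconditions`, `spec.resources`, `spec.post`) and `manifest.ref` are from this document; the inner structure is assumed, so check `configs/adapter-task-config-template.yaml` for the real schema.

```yaml
# Illustrative sketch only; inner structure is assumed.
spec:
  params:                      # param phase: values from env, event, and API status
    - name: clusterId          # assumed param syntax
  preconditions:               # decision phase: CEL expressions (assumed syntax)
    - expression: "clusterId != ''"
  resources:                   # resource phase: manifests to apply
    - manifest:
        ref: manifests/namespace.yaml   # external file via manifest.ref
  post: []                     # status reporting / post-actions
```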
### Broker Config

Broker configuration is a special case, since responsibility is split between two components:

- The hyperfleet-broker library configures the connection to a concrete broker (Google Pub/Sub, RabbitMQ, ...). It is configured using a YAML file specified by the `BROKER_CONFIG_FILE` environment variable.
- The adapter configures which topic/subscriptions to use on the broker. Configure the topic/subscription in `adapter-config.yaml`; it can be overridden with environment variables or CLI params.

See the Helm chart documentation for broker configuration options.
## Deployment

The project includes a Helm chart for Kubernetes deployment.

```bash
# Install the chart
helm install hyperfleet-adapter ./charts/

# Install with custom values
helm install hyperfleet-adapter ./charts/ -f ./charts/examples/values.yaml

# Upgrade deployment
helm upgrade hyperfleet-adapter ./charts/

# Uninstall
helm delete hyperfleet-adapter
```

For detailed Helm chart documentation, see charts/README.md.
## Container Images

Build and push container images:

```bash
# Build container image
make image

# Build with custom tag
make image IMAGE_TAG=v1.0.0

# Build and push to default registry
make image-push

# Build and push to personal Quay registry (for development)
QUAY_USER=myuser make image-dev
```

Default image: `quay.io/openshift-hyperfleet/hyperfleet-adapter:latest`
The container build automatically embeds version metadata (version, git commit, build date) into the binary. The git commit is passed from the build machine via `--build-arg GIT_COMMIT`. To override:
```bash
make image GIT_COMMIT=abc1234
```

## Testing

### Unit Tests

```bash
# Run unit tests (fast, no dependencies)
make test
```

Unit tests include:
- Logger functionality and context handling
- Error handling and error codes
- Operation ID middleware
- Template rendering and parsing
- Kubernetes client logic
### Integration Tests

Integration tests use Testcontainers with a dynamically installed envtest, which works on any CI/CD platform without requiring privileged containers.

Setup and run:
- Docker or Podman must be running (both fully supported!):
  - Docker: `docker info`
  - Podman: `podman info`
- The Makefile automatically detects and configures your container runtime
- Podman users: corporate proxy settings are auto-detected from the Podman machine
```bash
# Run integration tests with pre-built envtest (default - unprivileged)
make test-integration

# Run integration tests with K3s (faster, may need privileges)
make test-integration-k3s

# Run all tests (unit + integration)
make test-all

# Generate coverage report
make test-coverage
```

The first run will download golang:alpine and install envtest (~20-30 seconds). Subsequent runs are faster with caching.
- ✅ Simple Setup: Just needs Docker/Podman (no binary installation, no custom Dockerfile)
- ✅ Unprivileged: Works in ANY CI/CD platform (OpenShift, Tekton, restricted runners)
- ✅ Real API: Kubernetes API server + etcd (sufficient for most integration tests)
- ✅ Podman Optimized: Auto-detects proxy, works in corporate networks
- ✅ CI/CD Ready: No privileged mode required
- ✅ Isolated: Fresh environment for each test suite
Performance: ~30-40 seconds for complete test suite (10 suites, 24 test cases).
Alternative: Use K3s (`make test-integration-k3s`) for 2x faster tests if privileged containers are available.
- ⚠️ Requires Docker or rootful Podman
- ✅ The Makefile automatically checks the Podman mode and provides helpful instructions if it is incompatible
📖 Full guide: test/integration/k8s_client/README.md
### Test Coverage

```bash
# Generate coverage report
make test-coverage

# Generate HTML coverage report
make test-coverage-html
```

Expected total coverage: ~65-75% (unit + integration tests)
📊 Test Status: See TEST_STATUS.md for detailed coverage analysis
## Logging

The adapter uses structured logging with context-aware fields:

- Transaction ID (`txid`): request transaction identifier
- Operation ID (`opid`): unique operation identifier
- Adapter ID (`adapter_id`): adapter instance identifier
- Cluster ID (`cluster_id`): cluster identifier

Logs are formatted with prefixes like: `[opid=abc123][adapter_id=adapter-1] message`
## Error Handling

The adapter uses a structured error handling system:

- Error Codes: standardized error codes with prefixes
- Error References: API references for error documentation
- Error Types: common error types (NotFound, Validation, Conflict, etc.)

See `pkg/errors/error.go` for the error handling implementation.
## Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines on:
- Code style and standards
- Testing requirements
- Pull request process
- Commit message guidelines
## Repository Access

All members of the hyperfleet team have write access to this repository.
If you're a team member and need access to this repository:
1. Verify Organization Membership: ensure you're a member of the `openshift-hyperfleet` organization
2. Check Team Assignment: confirm you're added to the hyperfleet team within the organization
3. Repository Permissions: all hyperfleet team members automatically receive write access
4. OWNERS File: code reviews and approvals are managed through the OWNERS file
For access issues, contact a repository administrator or organization owner.
## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
## Support

For issues, questions, or contributions, please open an issue on GitHub.