HyperFleet Adapter Framework - Event-driven adapter services for HyperFleet cluster provisioning. Handles CloudEvents consumption, AdapterConfig CRD integration, precondition evaluation, Kubernetes Job creation/monitoring, and status reporting via API. Supports GCP Pub/Sub, RabbitMQ broker abstraction.

HyperFleet Adapter

HyperFleet Adapter Framework - a configuration-driven framework to run tasks for cluster provisioning.

An adapter instance targets a specific resource, such as a Cluster or a NodePool, and follows a clear workflow:

  • Listens to events: a CloudEvent indicates which resource to process.
    • Supports different broker types via the hyperfleet-broker library.
  • Param phase: gathers parameters from the environment, the event, and the current resource status (by querying the HyperFleet API).
  • Decision phase: uses the params to decide whether an action must be performed on the resource.
  • Resource phase: creates resources using a configured client:
    • Kubernetes client: local or remote cluster
    • Maestro client: remote cluster via Maestro server
  • Status reporting: reports the result of task execution to the HyperFleet API.
    • Builds the payload by evaluating the status of the resources created in the resource phase.
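The phases above can be sketched as a minimal pipeline. Everything below is illustrative (the type names, the decision rule, and the resource naming are assumptions), not the framework's actual API:

```go
package main

import "fmt"

// Hypothetical, simplified view of the adapter workflow. The real
// framework drives these phases from CloudEvents and config files.

// Params holds values collected in the param phase
// (environment, event, and current resource status from the API).
type Params map[string]string

// decide is a placeholder for the decision phase: it returns true
// when the task should act on the resource.
func decide(p Params) bool {
	return p["phase"] == "Pending"
}

// createResources stands in for the resource phase (Kubernetes or
// Maestro client); it returns the names of resources it "created".
func createResources(p Params) []string {
	return []string{"job/" + p["cluster"] + "-provision"}
}

// reportStatus stands in for status reporting to the HyperFleet API.
func reportStatus(created []string) string {
	return fmt.Sprintf("reported %d resource(s)", len(created))
}

func main() {
	// A CloudEvent told us which resource to process.
	params := Params{"cluster": "demo", "phase": "Pending"}
	if decide(params) {
		created := createResources(params)
		fmt.Println(reportStatus(created))
	}
}
```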

Prerequisites

  • Go 1.24.6 or later
  • Docker (for building Docker images)
  • Kubernetes 1.19+ (for deployment)
  • Helm 3.0+ (for Helm chart deployment)
  • golangci-lint (for linting, optional)

Getting Started

Clone the Repository

git clone https://github.com/openshift-hyperfleet/hyperfleet-adapter.git
cd hyperfleet-adapter

Install Dependencies

make mod-tidy

Build

# Build the binary
make build

# The binary will be created at: bin/hyperfleet-adapter

Run Tests

# Run unit tests
make test

# Run integration tests (pre-built envtest - unprivileged, CI/CD friendly)
make test-integration

# Run integration tests with K3s (faster, may need privileges)
make test-integration-k3s

# Run all tests
make test-all

Linting

# Run linter
make lint

# Format code
make fmt

Development

Project Structure

hyperfleet-adapter/
├── cmd/
│   └── adapter/          # Main application entry point
├── pkg/
│   ├── errors/           # Error handling utilities
│   └── logger/           # Structured logging with context support
├── internal/
│   ├── broker_consumer/  # Message broker consumer implementations
│   ├── config_loader/    # Configuration loading logic
│   ├── criteria/         # Precondition and CEL evaluation
│   ├── executor/         # Event execution engine
│   ├── hyperfleet_api/   # HyperFleet API client
│   └── k8s_client/       # Kubernetes client wrapper
├── test/                 # Integration tests
├── charts/               # Helm chart for Kubernetes deployment
├── Dockerfile            # Multi-stage Docker build
├── Makefile              # Build and test automation
├── go.mod                # Go module dependencies
└── README.md             # This file

Available Make Targets

| Target | Description |
|--------|-------------|
| make build | Build binary |
| make test | Run unit tests |
| make test-integration | Run integration tests with pre-built envtest (unprivileged, CI/CD friendly) |
| make test-integration-k3s | Run integration tests with K3s (faster, may need privileges) |
| make test-all | Run all tests (unit + integration) |
| make test-coverage | Generate test coverage report |
| make lint | Run golangci-lint |
| make image | Build container image |
| make image-push | Build and push container image to registry |
| make image-dev | Build and push to personal Quay registry (requires QUAY_USER) |
| make fmt | Format code |
| make mod-tidy | Tidy Go module dependencies |
| make clean | Clean build artifacts |
| make verify | Run lint and test |
💡 Tip: Use make help to see all available targets with descriptions

Tool Dependency Management (Bingo)

HyperFleet Adapter uses bingo to manage Go tool dependencies with pinned versions.

Managed tools:

  • goimports - Code formatting and import organization
  • golangci-lint - Code linting

Common operations:

# Install all tools
bingo get

# Install a specific tool
bingo get <tool>

# Update a tool to latest version
bingo get <tool>@latest

# List all managed tools
bingo list

Tool versions are tracked in .bingo/*.mod files and loaded automatically via include .bingo/Variables.mk in the Makefile.

Configuration

A HyperFleet Adapter requires several files for configuration:

  • Adapter config: Configures the adapter framework application
  • Adapter Task config: Configures the adapter task steps that will create resources
  • Broker configuration: Configures the specific broker to use by the adapter framework to receive CloudEvents

For all configuration options, see the configuration.md file.

Adapter configuration

The adapter deployment configuration (AdapterConfig) controls runtime and infrastructure settings for the adapter process, such as client connections, retries, and broker subscription details. It is loaded with Viper, so values can be overridden by CLI flags and environment variables in this priority order: CLI flags > env vars > file > defaults.

  • Path: HYPERFLEET_ADAPTER_CONFIG (required)
  • Common fields: spec.adapter.version, spec.debugConfig, spec.clients.* (HyperFleet API, Maestro, broker, Kubernetes)

Reference examples:

  • configs/adapter-deployment-config.yaml (full reference with env/flag notes)
  • charts/examples/adapter-config.yaml (minimal deployment example)

Adapter task configuration

The adapter task configuration (AdapterTaskConfig) defines the business logic for processing events: parameters, preconditions, resources to create, and post-actions. This file is loaded as static YAML (no Viper overrides) and is required at runtime.

  • Path: HYPERFLEET_TASK_CONFIG (required)
  • Key sections: spec.params, spec.preconditions, spec.resources, spec.post
  • Resource manifests: inline YAML or external file via manifest.ref

Reference examples:

  • charts/examples/adapter-task-config.yaml (worked example)
  • configs/adapter-task-config-template.yaml (complete schema reference)
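Putting the key sections together, a task config might look roughly like the fragment below. The section names (spec.params, spec.preconditions, spec.resources, spec.post) and the manifest.ref mechanism come from this document; the field shapes inside each section are illustrative guesses, so consult configs/adapter-task-config-template.yaml for the authoritative schema:

```yaml
# Illustrative shape only; field layouts are assumptions.
spec:
  params:                 # values gathered from env, event, and API status
    - name: clusterId
  preconditions:          # expressions gating whether the task runs
    - expression: "params.clusterId != ''"
  resources:              # resources to create in the resource phase
    - manifest:
        ref: manifests/provision-job.yaml   # external file, or inline YAML
  post:                   # status reported back to the HyperFleet API
    - name: reportStatus
```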

Broker Configuration

Broker configuration is split between two components:

  • Hyperfleet broker library: configures the connection to a concrete broker (Google Pub/Sub, RabbitMQ, ...)
    • Configured via a YAML file specified by the BROKER_CONFIG_FILE environment variable
  • Adapter: configures which topic/subscription to use on the broker
    • Set the topic/subscription in adapter-config.yaml; it can be overridden with environment variables or CLI flags

See the Helm chart documentation for broker configuration options.
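To make the split concrete: connection settings live in the file pointed to by BROKER_CONFIG_FILE, while the topic/subscription lives in the adapter config. The fragment below is purely illustrative; the field names are assumptions, so see the hyperfleet-broker library and Helm chart documentation for the actual schema:

```yaml
# File referenced by BROKER_CONFIG_FILE (owned by the broker library).
# Field names here are illustrative, not the library's real schema.
broker:
  type: gcppubsub          # or rabbitmq
  project: my-gcp-project
# The topic/subscription is NOT set here; it belongs to the adapter
# config (adapter-config.yaml), overridable via env vars or CLI flags.
```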

Deployment

Using Helm Chart

The project includes a Helm chart for Kubernetes deployment.

# Install the chart
helm install hyperfleet-adapter ./charts/

# Install with custom values
helm install hyperfleet-adapter ./charts/ -f ./charts/examples/values.yaml

# Upgrade deployment
helm upgrade hyperfleet-adapter ./charts/

# Uninstall
helm delete hyperfleet-adapter

For detailed Helm chart documentation, see charts/README.md.

Container Image

Build and push container images:

# Build container image
make image

# Build with custom tag
make image IMAGE_TAG=v1.0.0

# Build and push to default registry
make image-push

# Build and push to personal Quay registry (for development)
QUAY_USER=myuser make image-dev

Default image: quay.io/openshift-hyperfleet/hyperfleet-adapter:latest

The container build automatically embeds version metadata (version, git commit, build date) into the binary. The git commit is passed from the build machine via --build-arg GIT_COMMIT. To override:

make image GIT_COMMIT=abc1234
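Embedding version metadata typically works by overriding package-level variables with the Go linker's -X flag at build time. The variable names below are illustrative, not necessarily the ones this Makefile sets:

```go
package main

import "fmt"

// Defaults are replaced at build time, e.g.:
//   go build -ldflags "-X main.version=v1.0.0 -X main.gitCommit=abc1234 -X main.buildDate=2024-01-01"
var (
	version   = "dev"
	gitCommit = "unknown"
	buildDate = "unknown"
)

// versionString formats the embedded metadata for logs or a
// --version flag.
func versionString() string {
	return fmt.Sprintf("hyperfleet-adapter %s (commit %s, built %s)",
		version, gitCommit, buildDate)
}

func main() {
	fmt.Println(versionString())
}
```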

Testing

Unit Tests

# Run unit tests (fast, no dependencies)
make test

Unit tests include:

  • Logger functionality and context handling
  • Error handling and error codes
  • Operation ID middleware
  • Template rendering and parsing
  • Kubernetes client logic

Integration Tests

Integration tests use Testcontainers with dynamically installed envtest - works in any CI/CD platform without requiring privileged containers.


Prerequisites

  • Docker or Podman must be running (both are fully supported)
    • Docker: docker info
    • Podman: podman info
  • The Makefile automatically detects and configures your container runtime
  • Podman users: Corporate proxy settings are auto-detected from Podman machine

Run Tests

# Run integration tests with pre-built envtest (default - unprivileged)
make test-integration

# Run integration tests with K3s (faster, may need privileges)
make test-integration-k3s

# Run all tests (unit + integration)
make test-all

# Generate coverage report
make test-coverage

The first run will download golang:alpine and install envtest (~20-30 seconds). Subsequent runs are faster with caching.

Advantages

  • Simple Setup: Just needs Docker/Podman (no binary installation, no custom Dockerfile)
  • Unprivileged: Works in ANY CI/CD platform (OpenShift, Tekton, restricted runners)
  • Real API: Kubernetes API server + etcd (sufficient for most integration tests)
  • Podman Optimized: Auto-detects proxy, works in corporate networks
  • CI/CD Ready: No privileged mode required
  • Isolated: Fresh environment for each test suite

Performance: ~30-40 seconds for complete test suite (10 suites, 24 test cases).

Alternative: Use K3s (make test-integration-k3s) for 2x faster tests if privileged containers are available.

  • ⚠️ Requires Docker or rootful Podman
  • ✅ Makefile automatically checks Podman mode and provides helpful instructions if incompatible

📖 Full guide: test/integration/k8s_client/README.md

Test Coverage

# Generate coverage report
make test-coverage

# Generate HTML coverage report
make test-coverage-html

Expected Total Coverage: ~65-75% (unit + integration tests)

📊 Test Status: See TEST_STATUS.md for detailed coverage analysis

Logging

The adapter uses structured logging with context-aware fields:

  • Transaction ID (txid): Request transaction identifier
  • Operation ID (opid): Unique operation identifier
  • Adapter ID (adapter_id): Adapter instance identifier
  • Cluster ID (cluster_id): Cluster identifier

Logs are formatted with prefixes like: [opid=abc123][adapter_id=adapter-1] message

Error Handling

The adapter uses a structured error handling system:

  • Error Codes: Standardized error codes with prefixes
  • Error References: API references for error documentation
  • Error Types: Common error types (NotFound, Validation, Conflict, etc.)

See pkg/errors/error.go for error handling implementation.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines on:

  • Code style and standards
  • Testing requirements
  • Pull request process
  • Commit message guidelines

Repository Access

All members of the hyperfleet team have write access to this repository.

Steps to Apply for Repository Access

If you're a team member and need access to this repository:

  1. Verify Organization Membership: Ensure you're a member of the openshift-hyperfleet organization
  2. Check Team Assignment: Confirm you're added to the hyperfleet team within the organization
  3. Repository Permissions: All hyperfleet team members automatically receive write access
  4. OWNERS File: Code reviews and approvals are managed through the OWNERS file

For access issues, contact a repository administrator or organization owner.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Related Documentation

Support

For issues, questions, or contributions, please open an issue on GitHub.
