Spring Load Development is a project that demonstrates the integration of various modern technologies and frameworks to build a robust microservices architecture. This project showcases the use of Spring Boot for microservices, Spring WebFlux for reactive RESTful web services, Spring Data R2DBC for reactive database connectivity, Spring Cloud Config for externalized configuration management, Spring Cloud Gateway for API gateway, Resilience4J for circuit breaker patterns, and Spring AI for Model Context Protocol (MCP) server integration.
> [!NOTE]
> Load development is used here as an example business domain to demonstrate microservices architecture and modern cloud-native patterns. In the context of competitive shooting, load development refers to the process of systematically testing and refining ammunition components (such as powder charge, bullet, primer, and case dimensions) to achieve optimal accuracy and performance for a specific firearm. This project manages and analyzes load development data as a sample use case, but the architecture and patterns are applicable to many other domains.
Key features:

- Microservices using Spring Boot
- Reactive RESTful Web Services using Spring WebFlux
- Reactive Relational Database Connectivity using Spring Data R2DBC
- Externalized Configuration Management using Spring Cloud Config
- Service Discovery using Spring Cloud Netflix
- Role-Based Access Control (RBAC) using Spring Security and Keycloak
- API Gateway using Spring Cloud Gateway
- Circuit Breaker using Spring Cloud Circuit Breaker and Resilience4J
- AI Integration using Spring AI and Model Context Protocol (MCP)
- Observability and Monitoring using OpenTelemetry (via Spring Boot starter), Tempo, Loki, Prometheus, and Grafana
- Container Orchestration using Kubernetes and Helm
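Of these, the circuit breaker pattern benefits from a concrete illustration. The sketch below is plain Python, not Resilience4J's API (which adds sliding windows, configurable half-open trial counts, metrics, and annotation-driven configuration); it shows only the core state machine a breaker implements:

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker sketch: CLOSED -> OPEN after repeated
    failures, OPEN -> HALF_OPEN after a cooldown, HALF_OPEN -> CLOSED
    on the next success (or straight back to OPEN on failure)."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.state = "CLOSED"
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.state == "OPEN":
            if self.clock() - self.opened_at >= self.reset_timeout:
                self.state = "HALF_OPEN"   # allow one trial request through
            else:
                return fallback()          # fail fast, no downstream call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
                self.state = "OPEN"
                self.opened_at = self.clock()
            return fallback()
        self.failures = 0
        self.state = "CLOSED"
        return result
```

In this project the equivalent behavior is supplied declaratively by Spring Cloud Circuit Breaker backed by Resilience4J, so downstream failures degrade to fallbacks instead of cascading.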
> [!NOTE]
> All technologies are open source and widely adopted in cloud-native Java ecosystems. Version details can be found in the respective project files.

This project uses the following key versions:
- Java: 25
- Spring Boot: 4.0.1
- Spring Cloud: 2025.1.0
- Spring AI: 1.1.2
- SpringDoc OpenAPI: 3.0.0
- MapStruct: 1.6.3
- OpenTelemetry Instrumentation: 2.23.0 (+ alpha)
- Maven: 4.0.0-rc-5 or later
- PostgreSQL: 18.1
- Keycloak: 26.4
- Grafana: 12.3.1
- Loki: 3.6.3
- Tempo: 2.9.0
- Prometheus: v3.8.1
- OpenTelemetry Collector: 0.142.0
To build and run the project, you will need:

- Java 25 or later
- Docker and Docker Compose
- Maven 4.0+ (4.0.0-rc-5 or later)
- Kubernetes cluster and kubectl (for Kubernetes deployment)
- Helm 3.0+ (for Helm deployment)
The Spring Load Development application can be deployed in multiple ways:
- Docker Compose - For local development and testing
- Kubernetes with Helm - For production deployment
- Manual Spring Boot - For development debugging
```bash
git clone https://github.com/zhoozhoo/spring-load-development.git
cd spring-load-development
```

```bash
# Add and update helm repo (if using a helm repository)
helm repo update

# Create required namespaces first
cd helm/spring-load-development
./create-namespaces.sh

# Install the application
helm install spring-load-development . \
  --values values.yaml \
  --timeout 300s
```

```bash
# Get service URLs
kubectl get services -n reloading

# For local development, port-forward to access services
kubectl port-forward -n reloading service/api-gateway 8080:8080
```

For detailed Helm chart configuration, see Helm Chart Documentation.
```bash
docker-compose --env-file .env up -d postgres keycloak grafana loki tempo prometheus otel-collector
```

```bash
# IMPORTANT: All services default to port 8080. To run them
# side-by-side locally, supply a distinct port with -Dserver.port.

# 1. Start Config Server (required) on 8888
java -Dserver.port=8888 -jar config-server/target/config-server-*.jar

# 2. Start Discovery Server on 8761
java -Dserver.port=8761 -jar discovery-server/target/discovery-server-*.jar

# 3. Start API Gateway on 8080 (external entrypoint)
java -jar api-gateway/target/api-gateway-*.jar

# 4. Start microservices (distinct ports)
java -Dserver.port=8081 -jar loads-service/target/loads-service-*.jar
java -Dserver.port=8082 -jar rifles-service/target/rifles-service-*.jar
java -Dserver.port=8083 -jar components-service/target/components-service-*.jar
java -Dserver.port=8084 -jar mcp-server/target/mcp-server-*.jar
```

> [!TIP]
> You can also override ports via the `SERVER_PORT` environment variable or an `application-local.yml` profile.
Once the services are up and running, you can access them at the following URLs:
- API Gateway: http://localhost:8080
- Keycloak Admin Console: http://localhost:7080
- Grafana Dashboard: http://localhost:3000
For Kubernetes deployment, services are accessible via NodePort or port-forwarding:
```bash
# Port-forward API Gateway
kubectl port-forward -n reloading service/api-gateway 8080:8080

# Port-forward Grafana
kubectl port-forward -n observability service/grafana-service 3000:3000

# Port-forward Keycloak
kubectl port-forward -n keycloak service/keycloak-service 7080:8080
```

Alternatively, if using NodePort services, check the assigned ports:

```bash
kubectl get services -A | grep NodePort
```

The project includes an MCP (Model Context Protocol) server that provides AI-assisted tools for managing loads and rifles:
- Integration with GitHub Copilot through the Model Context Protocol
- AI-assisted load development analysis
- Intelligent rifle configuration recommendations
- Natural language queries for load data
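Under the hood, MCP tool invocations are JSON-RPC 2.0 messages sent over the SSE transport. The sketch below builds a `tools/call` request; the tool name and arguments are hypothetical, since the actual tools are whatever the mcp-server module registers via Spring AI:

```python
import json


def mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request body for MCP's tools/call method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


# "searchLoads" and its arguments are illustrative placeholders only.
payload = mcp_tool_call(1, "searchLoads", {"caliber": "6.5 Creedmoor"})
```

Clients such as GitHub Copilot generate these messages for you; the sketch is only meant to demystify what travels over the `/sse` endpoint.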
To connect GitHub Copilot to the MCP server, configure the `.vscode/mcp.json` file in your project directory:

```json
{
  "servers": {
    "reloading-mcp-server": {
      "type": "sse",
      "url": "http://localhost:8080/sse"
    }
  }
}
```

API endpoints are documented using OpenAPI (Swagger). Once the services are running, the documentation is available from each service's Swagger UI. Alternatively, use the `.http` files in the top-level `test/` directory with the VS Code REST Client extension for manual testing. (API Testing Guide)
The application is composed of the following services:
- Config Server: Centralized configuration management for all services
- Discovery Server: Service registry and discovery using Eureka
- API Gateway: Routes and filters requests to appropriate services
- Rifles Service: Manages rifle data and configurations
- Loads Service: Handles load development data including groups and shots
- Components Service: Manages reloading components (cases, propellants/powders, primers, projectiles/bullets) with full-text search capabilities
- MCP Server: Provides AI-assisted tools via Model Context Protocol for loads and rifles management (MCP Server Guide)
- Common Library: Shared DTOs, mappers, and utilities (not a runtime service)
- Integration Tests Module: End-to-end verification (build-time only)
The centralized configuration for all services is stored in a separate GitHub repository: https://github.com/zhoozhoo/spring-load-development-config
The Config Server automatically picks up configuration files from this repository at startup.
The application uses Keycloak for identity and access management with the following features:
- Role-based access control (RBAC)
- JWT token-based authentication
- OAuth2/OpenID Connect integration
- Predefined roles: RELOADER
- Fine-grained permissions for loads and rifles management
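Keycloak access tokens are JWTs that carry realm roles in the `realm_access.roles` claim. The sketch below decodes that claim from a token's payload segment. It is illustrative only and deliberately skips signature verification, which the real services delegate to Spring Security's JWT validation against Keycloak:

```python
import base64
import json


def realm_roles(jwt_token):
    """Extract Keycloak realm roles from a JWT's payload segment.
    WARNING: no signature check -- never trust unverified tokens."""
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("realm_access", {}).get("roles", [])


def b64url(obj):
    """Encode a dict as an unpadded base64url JSON segment."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")


# A toy header.payload.signature token, just to demonstrate decoding.
token = ".".join([
    b64url({"alg": "none"}),
    b64url({"sub": "user1", "realm_access": {"roles": ["RELOADER"]}}),
    "",
])
```

A service guarding an endpoint would check for the RELOADER role (plus the fine-grained permissions) before serving load or rifle data.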
The project includes a comprehensive observability stack whose components work together:

- OpenTelemetry Collector: Centralized collection of telemetry data (traces, logs, metrics)
- Tempo: Distributed tracing backend for storing and querying traces
- Loki: Log aggregation system for centralized log storage and querying
- Prometheus: Metrics collection and alerting
- Grafana: Unified dashboards for visualizing metrics, traces, and logs
Access the observability components at:

- Grafana (unified dashboards): http://localhost:3000
- Prometheus (metrics): http://localhost:9091
The diagram below shows the monitoring data flow used by this project. Services send telemetry (traces, logs, and metrics) over OTLP to an OpenTelemetry Collector, which routes traces to Tempo, logs to Loki, and metrics to Prometheus. Prometheus can additionally scrape the Actuator metrics endpoint directly, and Alertmanager (not shown) handles alerts raised by Prometheus. Grafana visualizes metrics, traces, and logs.
```mermaid
flowchart LR
    subgraph Services[Services / Applications]
        S1["Microservice"]
    end

    subgraph OTEL[Observability Plane]
        OC["OpenTelemetry Collector"]
        Tempo["Tempo\n(Trace Store)"]
        Loki["Loki\n(Log Store)"]
        Prom["Prometheus"]
        Graf["Grafana"]
    end

    %% Service -> telemetry
    S1 -->|"OTLP (traces)"| OC
    S1 -->|"OTLP (logs)"| OC
    S1 -->|"OTLP (metrics)"| OC

    %% Collector -> backends
    OC -->|"traces"| Tempo
    OC -->|"logs"| Loki
    OC -->|"metrics"| Prom

    %% Visualisation / alerting
    Tempo -->|traces| Graf
    Loki -->|logs| Graf
    Prom -->|metrics| Graf

    classDef serviceStyle fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000000
    classDef infraStyle fill:#fafafa,stroke:#616161,stroke-width:2px,color:#000000

    class S1 serviceStyle
    class OC,Tempo,Loki,Prom,Graf infraStyle
```
This observability architecture shows how all telemetry data (traces, logs, and metrics) is centrally collected by the OpenTelemetry Collector and distributed to specialized backends: Tempo for traces, Loki for logs, and Prometheus for metrics. Grafana provides unified dashboards combining all three data types for comprehensive observability.
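The Collector's fan-out reduces to a per-signal routing table: one pipeline per signal type, each with its own backend. The sketch below is a deliberately minimal Python illustration of that concept, not the Collector's actual configuration format:

```python
# One pipeline per signal type, mirroring this project's setup
# (the backend names are this project's, not Collector defaults).
PIPELINES = {
    "traces": "tempo",
    "logs": "loki",
    "metrics": "prometheus",
}


def route(batch):
    """Assign an incoming OTLP batch to its backend by signal type."""
    return PIPELINES[batch["signal"]], batch["data"]
```

In a real Collector, each pipeline additionally applies receivers and processors (batching, filtering) before its exporters run.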
The Spring Load Development application supports two primary deployment models: Docker Compose for local development and Kubernetes with Helm for production deployments.
```mermaid
flowchart TB
    subgraph "Docker Compose - Local Development"
        subgraph "Application Services"
            DC_GW[API Gateway<br/>:8080]
            DC_CONFIG[Config Server]
            DC_DISC[Discovery Server]
            DC_LOADS[Loads Service]
            DC_RIFLES[Rifles Service]
            DC_COMP[Components Service]
            DC_MCP[MCP Server]
        end
        subgraph "Infrastructure"
            DC_PG[(PostgreSQL<br/>:5432)]
            DC_KC[Keycloak<br/>:7080]
        end
        subgraph "Observability"
            DC_GRAF[Grafana<br/>:3000]
            DC_LOKI[Loki]
            DC_TEMPO[Tempo]
            DC_PROM[Prometheus]
            DC_OTEL[OTEL Collector]
        end
    end

    DC_GW --> DC_LOADS
    DC_GW --> DC_RIFLES
    DC_GW --> DC_COMP
    DC_GW --> DC_MCP
    DC_LOADS --> DC_PG
    DC_RIFLES --> DC_PG
    DC_COMP --> DC_PG
    DC_MCP --> DC_PG
    DC_GW --> DC_KC
    DC_LOADS --> DC_OTEL
    DC_RIFLES --> DC_OTEL
    DC_COMP --> DC_OTEL
    DC_MCP --> DC_OTEL
    DC_OTEL --> DC_LOKI
    DC_OTEL --> DC_TEMPO
    DC_OTEL --> DC_PROM
    DC_LOKI --> DC_GRAF
    DC_TEMPO --> DC_GRAF
    DC_PROM --> DC_GRAF

    classDef appStyle fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000000
    classDef infraStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000000
    classDef obsStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000000

    class DC_GW,DC_CONFIG,DC_DISC,DC_LOADS,DC_RIFLES,DC_COMP,DC_MCP appStyle
    class DC_PG,DC_KC infraStyle
    class DC_GRAF,DC_LOKI,DC_TEMPO,DC_PROM,DC_OTEL obsStyle
```
Docker Compose Benefits:
- Single command deployment: `docker-compose up -d`
- Automatic service networking and discovery
- Easy local debugging with port mapping
- Fast iteration during development
- Minimal resource requirements
- Simple configuration via `.env` file
```mermaid
flowchart TB
    subgraph "Kubernetes Cluster"
        subgraph "Namespace: reloading"
            K8S_GW[API Gateway<br/>NodePort: 30090]
            K8S_LOADS[Loads Service]
            K8S_RIFLES[Rifles Service]
            K8S_COMP[Components Service]
            K8S_MCP[MCP Server]
        end
        subgraph "Namespace: postgres"
            K8S_PG[(PostgreSQL StatefulSet<br/>8Gi PVC)]
        end
        subgraph "Namespace: keycloak"
            K8S_KC_PG[(Keycloak PostgreSQL<br/>StatefulSet<br/>8Gi PVC)]
            K8S_KC[Keycloak<br/>NodePort: 30080]
        end
        subgraph "Namespace: observability"
            K8S_GRAF[Grafana<br/>NodePort: 30000<br/>5Gi PVC]
            K8S_LOKI[Loki<br/>10Gi PVC]
            K8S_TEMPO[Tempo<br/>10Gi PVC]
            K8S_PROM[Prometheus<br/>10Gi PVC]
            K8S_OTEL[OTEL Collector<br/>NodePort: 30317]
        end
    end

    K8S_GW --> K8S_LOADS
    K8S_GW --> K8S_RIFLES
    K8S_GW --> K8S_COMP
    K8S_GW --> K8S_MCP
    K8S_LOADS --> K8S_PG
    K8S_RIFLES --> K8S_PG
    K8S_COMP --> K8S_PG
    K8S_MCP --> K8S_PG
    K8S_KC --> K8S_KC_PG
    K8S_GW --> K8S_KC
    K8S_LOADS --> K8S_OTEL
    K8S_RIFLES --> K8S_OTEL
    K8S_COMP --> K8S_OTEL
    K8S_MCP --> K8S_OTEL
    K8S_OTEL --> K8S_LOKI
    K8S_OTEL --> K8S_TEMPO
    K8S_OTEL --> K8S_PROM
    K8S_LOKI --> K8S_GRAF
    K8S_TEMPO --> K8S_GRAF
    K8S_PROM --> K8S_GRAF

    classDef appStyle fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000000
    classDef infraStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000000
    classDef obsStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000000

    class K8S_GW,K8S_LOADS,K8S_RIFLES,K8S_COMP,K8S_MCP appStyle
    class K8S_PG,K8S_KC_PG,K8S_KC infraStyle
    class K8S_GRAF,K8S_LOKI,K8S_TEMPO,K8S_PROM,K8S_OTEL obsStyle
```
Kubernetes Benefits:
- Production-ready with high availability support
- Automatic scaling and self-healing
- Persistent storage with StatefulSets
- Namespace isolation for security
- ConfigMaps and Secrets management
- Rolling updates and rollbacks
- Resource limits and health checks
- Helm chart for easy management
| Aspect | Docker Compose | Kubernetes |
|---|---|---|
| Deployment | Single file, one command | Helm chart with multiple resources |
| Scaling | Manual container scaling | Horizontal pod autoscaling |
| Networking | Simple port mapping | Services, NodePorts, Ingress |
| Storage | Docker volumes | PersistentVolumeClaims, StatefulSets |
| Configuration | Environment variables, `.env` files | ConfigMaps, Secrets |
| Use Case | Local development, testing | Production, staging, multi-node |
| Resource Overhead | Minimal | Requires cluster infrastructure |
| Service Discovery | Docker DNS | Kubernetes DNS, Service discovery |
| Updates | Manual restart | Rolling updates, zero downtime |
```mermaid
flowchart LR
    subgraph Clients
        User[User]
        Copilot[GitHub Copilot]
    end

    APIGateway[API Gateway\n:8080]

    subgraph Microservices
        LoadsService[Loads Service]
        RiflesService[Rifles Service]
        ComponentsService[Components Service]
        MCPServer[MCP Server]
    end

    subgraph Infrastructure
        ConfigServer[Config Server\n:8888]
        DiscoveryServer[Discovery Server\n:8761]
        Keycloak[Keycloak Auth\n:7080]
    end

    subgraph Observability
        OTEL[OTel Collector]
        Prom[Prometheus]
        Loki[Loki]
        Tempo[Tempo]
        Graf[Grafana]
    end

    Postgres[(PostgreSQL 18)]

    User -->|REST| APIGateway
    Copilot -->|MCP SSE| APIGateway
    APIGateway --> LoadsService & RiflesService & ComponentsService & MCPServer
    LoadsService --> Postgres
    RiflesService --> Postgres
    ComponentsService --> Postgres
    MCPServer --> Postgres
    Microservices -->|Config| ConfigServer
    Microservices -->|Register| DiscoveryServer
    Microservices -->|Auth| Keycloak
    Microservices -->|Telemetry| OTEL
    OTEL --> Prom & Loki & Tempo
    Prom --> Graf
    Loki --> Graf
    Tempo --> Graf

    classDef clientStyle fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    classDef serviceStyle fill:#e8f5e8,stroke:#388e3c,stroke-width:2px
    classDef infraStyle fill:#fafafa,stroke:#616161,stroke-width:2px
    classDef dataStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    classDef obsStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px

    class User,Copilot clientStyle
    class LoadsService,RiflesService,ComponentsService,MCPServer serviceStyle
    class ConfigServer,DiscoveryServer,Keycloak infraStyle
    class Postgres dataStyle
    class OTEL,Prom,Loki,Tempo,Graf obsStyle
```
The application uses PostgreSQL 18.1 with the following schema design. The schema includes full-text search capabilities for component tables (bullets, powders, primers, cases) using PostgreSQL's tsvector type, enabling efficient natural language searches across manufacturer names, types, and other attributes.
```mermaid
erDiagram
    LOADS {
        BIGSERIAL id PK
        VARCHAR(255) owner_id "NOT NULL"
        VARCHAR(255) name "NOT NULL"
        TEXT description
        VARCHAR(32) measurement_units "NOT NULL, CHECK (Imperial, Metric)"
        VARCHAR(255) powder_manufacturer "NOT NULL"
        VARCHAR(255) powder_type "NOT NULL"
        VARCHAR(255) bullet_manufacturer "NOT NULL"
        VARCHAR(255) bullet_type "NOT NULL"
        DOUBLE_PRECISION bullet_weight "NOT NULL"
        VARCHAR(255) primer_manufacturer "NOT NULL"
        VARCHAR(255) primer_type "NOT NULL"
        DOUBLE_PRECISION distance_from_lands
        DOUBLE_PRECISION case_overall_length
        DOUBLE_PRECISION neck_tension
        BIGSERIAL rifle_id FK
    }
    GROUPS {
        BIGSERIAL id PK
        VARCHAR(255) owner_id "NOT NULL"
        BIGSERIAL load_id FK "NOT NULL"
        DATE date "NOT NULL"
        DOUBLE_PRECISION powder_charge "NOT NULL"
        INTEGER target_range "NOT NULL"
        DOUBLE_PRECISION group_size
    }
    SHOTS {
        BIGSERIAL id PK
        VARCHAR(255) owner_id "NOT NULL"
        BIGSERIAL group_id FK "NOT NULL"
        INTEGER velocity
    }
    RIFLES {
        BIGSERIAL id PK
        VARCHAR(255) owner_id "NOT NULL"
        VARCHAR(255) name "NOT NULL"
        TEXT description
        VARCHAR(32) measurement_units "NOT NULL, CHECK (Imperial, Metric)"
        VARCHAR(32) caliber "NOT NULL"
        DOUBLE_PRECISION barrel_length
        VARCHAR(32) barrel_contour
        VARCHAR(32) twist_rate
        VARCHAR(32) rifling
        DOUBLE_PRECISION free_bore
    }
    BULLETS {
        BIGSERIAL id PK
        VARCHAR(255) owner_id "NOT NULL"
        VARCHAR(255) manufacturer "NOT NULL"
        DOUBLE_PRECISION weight "NOT NULL"
        VARCHAR(255) type "NOT NULL"
        VARCHAR(255) measurement_units "NOT NULL"
        DECIMAL(10) cost "NOT NULL"
        VARCHAR(3) currency "NOT NULL"
        INTEGER quantity_per_box "NOT NULL"
        TSVECTOR search_vector
    }
    POWDERS {
        BIGSERIAL id PK
        VARCHAR(255) owner_id "NOT NULL"
        VARCHAR(255) manufacturer "NOT NULL"
        VARCHAR(255) type "NOT NULL"
        VARCHAR(255) measurement_units "NOT NULL"
        DECIMAL(10) cost
        VARCHAR(3) currency
        DOUBLE_PRECISION weight_per_container
        TSVECTOR search_vector
    }
    PRIMERS {
        BIGSERIAL id PK
        VARCHAR(255) owner_id "NOT NULL"
        VARCHAR(255) manufacturer "NOT NULL"
        VARCHAR(255) type "NOT NULL"
        VARCHAR(20) size "NOT NULL"
        DECIMAL(10) cost "NOT NULL"
        VARCHAR(3) currency "NOT NULL"
        INTEGER quantity_per_box "NOT NULL"
        TSVECTOR search_vector
    }
    CASES {
        BIGSERIAL id PK
        VARCHAR(255) owner_id "NOT NULL"
        VARCHAR(255) manufacturer "NOT NULL"
        VARCHAR(50) caliber "NOT NULL"
        VARCHAR(20) primer_size "NOT NULL"
        DECIMAL(10) cost "NOT NULL"
        VARCHAR(3) currency "NOT NULL"
        INTEGER quantity_per_box "NOT NULL"
        TSVECTOR search_vector
    }

    %% Relationships
    LOADS ||--o{ GROUPS : "has"
    GROUPS ||--o{ SHOTS : "has"
    RIFLES ||--o{ LOADS : "uses"
```
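The `group_size` and `velocity` columns above are the raw material for typical load development analysis. The sketch below shows the usual statistics in plain Python; it is illustrative, not code from the services, and the impact coordinates are hypothetical since the schema stores only the resulting group size:

```python
from itertools import combinations
from math import dist, sqrt


def velocity_stats(velocities):
    """Mean, sample standard deviation, and extreme spread of
    chronograph readings (the SHOTS.velocity values for one group)."""
    n = len(velocities)
    mean = sum(velocities) / n
    sd = sqrt(sum((v - mean) ** 2 for v in velocities) / (n - 1))
    es = max(velocities) - min(velocities)
    return mean, sd, es


def group_size(impacts):
    """Extreme-spread group size: the largest center-to-center distance
    between any two impact points (x, y) on the target."""
    return max(dist(a, b) for a, b in combinations(impacts, 2))
```

A low standard deviation and small group size at a given powder charge are exactly what a reloader looks for when comparing GROUPS rows for a load.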
The Components Service provides full-text search functionality for all component types (cases, powders, primers, bullets). Each component table includes a search_vector column of type tsvector that is automatically maintained and indexed for fast searching.
Search Features:
- Natural language queries across manufacturer names, types, and attributes
- Fuzzy matching and relevance ranking
- Case-insensitive searches
- Support for partial word matching
- PostgreSQL GIN indexes for optimal performance
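Conceptually, the `search_vector` column behaves like an inverted index over normalized lexemes. The toy Python sketch below approximates the idea; real `to_tsvector` additionally stems words, records positions and weights, and `ts_rank` handles relevance:

```python
import re
from collections import defaultdict


def lexemes(text):
    """Crude normalization: lowercase word tokens. PostgreSQL also
    stems ('cases' -> 'case') and drops stop words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


class ComponentIndex:
    """Toy inverted index approximating a GIN index over search_vector."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, row_id, *fields):
        for lex in lexemes(" ".join(fields)):
            self.postings[lex].add(row_id)

    def search(self, query):
        """AND semantics, like matching against to_tsquery('a & b')."""
        sets = [self.postings[lex] for lex in lexemes(query)]
        return sorted(set.intersection(*sets)) if sets else []
```

Each component service delegates the real work to PostgreSQL; the sketch only shows why lexeme-based matching is case-insensitive and fast to intersect.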
Example Search Queries:
- Find all Lapua cases: `GET /api/cases/search?query=Lapua`
- Search for H4350 powder: `GET /api/propellants/search?query=H4350`
- Find 140 grain bullets: `GET /api/projectiles/search?query=140`
- Search for CCI primers: `GET /api/primers/search?query=CCI`
For more details on search functionality and API usage, refer to the API Testing Guide.
- [ ] Update MCP server to support resources and prompts
- [ ] Add brass case attributes such as neck tension, headspace, etc.
- [ ] Implement load comparison and analysis tools
- [ ] Add shooting session tracking and analytics
- [ ] Enhance full-text search capabilities for loads and rifles
- [ ] Add ballistic coefficient calculations and external ballistics