16 changes: 8 additions & 8 deletions product_docs/docs/postgres_for_kubernetes/1/backup.mdx
Original file line number Diff line number Diff line change
@@ -17,9 +17,9 @@ for guidance.
!!!info Important
Starting with version 1.26, native backup and recovery capabilities are
being **progressively phased out** of the core operator and moved to official
CNP-I plugins. This transition aligns with {{name.ln}}' shift towards a
CNPG-I plugins. This transition aligns with {{name.ln}}' shift towards a
**backup-agnostic architecture**, enabled by its extensible
interface—**CNP-I**—which standardizes the management of **WAL archiving**,
interface—**CNPG-I**—which standardizes the management of **WAL archiving**,
**physical base backups**, and corresponding **recovery processes**.
!!!

@@ -58,7 +58,7 @@ up of the following resources:
- **Physical base backups**: a copy of all the files that PostgreSQL uses to
store the data in the database (primarily the `PGDATA` and any tablespace)

CNP-I provides a generic and extensible interface for managing WAL archiving
CNPG-I provides a generic and extensible interface for managing WAL archiving
(both archive and restore operations), as well as the base backup and
corresponding restore processes.

@@ -130,7 +130,7 @@ for your disaster recovery plans.
Kubernetes CSI interface and supported storage classes

!!!info Important
CNP-I is designed to enable third parties to build and integrate their own
CNPG-I is designed to enable third parties to build and integrate their own
backup plugins. Over time, we expect the ecosystem of supported backup
solutions to grow.
!!!
@@ -267,7 +267,7 @@ spec:
immediate: true
```

\### Pause Scheduled Backups
### Pause Scheduled Backups

To temporarily stop scheduled backups from running:

@@ -276,7 +276,7 @@ spec:
suspend: true
```
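
For reference, a complete `ScheduledBackup` manifest with suspension enabled might look like the following sketch; the resource names and schedule are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example
spec:
  # Six-field cron expression (seconds included): every day at midnight
  schedule: "0 0 0 * * *"
  # Temporarily pause this schedule without deleting the resource
  suspend: true
  cluster:
    name: cluster-example
```

Setting `suspend` back to `false` resumes the schedule.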

\### Backup Owner Reference (`.spec.backupOwnerReference`)
### Backup Owner Reference (`.spec.backupOwnerReference`)

Controls which Kubernetes object is set as the owner of the backup resource:
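
For illustration, the following sketch sets the `ScheduledBackup` itself as the owner of the `Backup` objects it creates; the `self` value and the resource names are assumptions for this example:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example
spec:
  schedule: "0 0 0 * * *"
  # Owner of the generated Backup resources (assumed value: the ScheduledBackup itself)
  backupOwnerReference: self
  cluster:
    name: cluster-example
```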

@@ -374,7 +374,7 @@ your broader Kubernetes cluster backup strategy.
{{name.ln}} currently supports the following backup methods for scheduled
and on-demand backups:

- `plugin` – Uses a CNP-I plugin (requires `.spec.pluginConfiguration`)
- `plugin` – Uses a CNPG-I plugin (requires `.spec.pluginConfiguration`)
- `volumeSnapshot` – Uses native [Kubernetes volume snapshots](backup_volumesnapshot.md#how-to-configure-volume-snapshot-backups)
- `barmanObjectStore` – Uses [Barman Cloud for object storage](backup_barmanobjectstore.md)
*(deprecated starting with v1.26 in favor of the
@@ -484,7 +484,7 @@ backup will be taken from the primary instance.
## Retention Policies

{{name.ln}} is evolving toward a **backup-agnostic architecture**, where
backup responsibilities are delegated to external **CNP-I plugins**. These
backup responsibilities are delegated to external **CNPG-I plugins**. These
plugins are expected to offer advanced and customizable data protection
features, including sophisticated retention management, that go beyond the
built-in capabilities and scope of {{name.ln}}.
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
---
title: 'CNP-I'
originalFilePath: 'src/cnp_i.md'
title: 'CNPG-I'
originalFilePath: 'src/cnpg_i.md'
---


@@ -9,32 +9,32 @@ The **CloudNativePG Interface** ([CNPG-I](https://github.com/cloudnative-pg/cnpg
is a standard way to extend and customize {{name.ln}} without modifying its
core codebase.

## Why CNP-I?
## Why CNPG-I?

{{name.ln}} supports a wide range of use cases, but sometimes its built-in
functionality isn’t enough, or adding certain features directly to the main
project isn’t practical.

Before CNP-I, users had two main options:
Before CNPG-I, users had two main options:

- Fork the project to add custom behavior, or
- Extend the upstream codebase by writing custom components on top of it.

Both approaches created maintenance overhead, slowed upgrades, and delayed delivery of critical features.

CNP-I solves these problems by providing a stable, gRPC-based integration
CNPG-I solves these problems by providing a stable, gRPC-based integration
point for extending {{name.ln}} at key points in a cluster’s lifecycle —such
as backups, recovery, and sub-resource reconciliation— without disrupting the
core project.

CNP-I can extend:
CNPG-I can extend:

- The operator, and/or
- The instance manager running inside PostgreSQL pods.

## Registering a plugin

CNP-I is inspired by the Kubernetes
CNPG-I is inspired by the Kubernetes
[Container Storage Interface (CSI)](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/).
The operator communicates with registered plugins using **gRPC**, following the
[CNPG-I protocol](https://github.com/cloudnative-pg/cnpg-i/blob/main/docs/protocol.md).
@@ -198,7 +198,7 @@ must include this DNS name in its Subject Alternative Names (SAN).

To enable a plugin, configure the `.spec.plugins` section in your `Cluster`
resource. Refer to the {{name.ln}} API Reference for the full
[PluginConfiguration](https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-k8s-enterprisedb-io-v1-PluginConfiguration)
[PluginConfiguration](https://cloudnative-pg.io/docs/devel/cloudnative-pg.v1/#pluginconfiguration)
specification.

Example:
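
For orientation, a minimal `plugins` stanza might look like the following sketch; the plugin name and parameters are illustrative assumptions rather than a prescribed configuration:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  plugins:
    # Assumed plugin name and parameters, shown only to illustrate the schema
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: my-object-store
```
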
@@ -229,10 +229,10 @@ deployed:

## Community plugins

The CNP-I protocol has quickly become a proven and reliable pattern for
The CNPG-I protocol has quickly become a proven and reliable pattern for
extending {{name.ln}} while keeping the core project maintainable.
Over time, the community has built and shared plugins that address real-world
needs and serve as examples for developers.

For a complete and up-to-date list of plugins built with CNP-I, please refer to the
For a complete and up-to-date list of plugins built with CNPG-I, please refer to the
[CNPG-I GitHub page](https://github.com/cloudnative-pg/cnpg-i?tab=readme-ov-file#projects-built-with-cnpg-i).
Original file line number Diff line number Diff line change
@@ -219,15 +219,37 @@ replicate similar behavior to the default setup.

## Pod templates

You can take advantage of pod templates specification in the `template`
section of a `Pooler` resource. For details, see
[`PoolerSpec`](pg4k.v1.md#poolerspec) in the API reference.
The `Pooler` resource allows you to customize the underlying pods via the
`template` section. This provides full access to the Kubernetes `PodSpec` for
advanced configurations like scheduling constraints, custom security contexts,
or resource overrides.

Using templates, you can configure pods as you like, including fine control
over affinity and anti-affinity rules for pods and nodes. By default,
containers use images from `docker.enterprisedb.com/k8s/pgbouncer`.
For a complete list of supported fields, see the
[`PoolerSpec`](pg4k.v1.md#poolerspec) API reference.

This example shows `Pooler` specifying \`PodAntiAffinity\`\`:
### Key requirements

- **The `pgbouncer` container name:** When overriding container settings (like
images or resources), the name of the container **must** be set to
`pgbouncer`. The operator looks for this specific name to manage the
PgBouncer process.

- **Mandatory `containers` field:** Since `template` follows the standard
Kubernetes `PodSpec` schema, the `containers` field is mandatory.

- If you aren't modifying container-level settings, you must set it to an empty
  array: `containers: []`, as shown in the sketch after this list.

- If the `containers` field is missing, the API server will throw a
`ValidationError`.
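
As referenced in the list above, a minimal sketch of a template that only customizes pod-level metadata while leaving the containers untouched (the label is illustrative):

```yaml
spec:
  template:
    metadata:
      labels:
        app: pooler
    spec:
      # Required by the PodSpec schema even when no container is customized
      containers: []
```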

### Examples

#### High availability with pod anti-affinity

This configuration uses `podAntiAffinity` to ensure that PgBouncer pods are
distributed across different nodes, preventing a single node failure from
taking down the entire pool.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
@@ -258,16 +280,10 @@ spec:
topologyKey: "kubernetes.io/hostname"
```

!!!note
Explicitly set `.spec.template.spec.containers` to `[]` when not modified,
as it's a required field for a `PodSpec`. If `.spec.template.spec.containers`
isn't set, the Kubernetes api-server returns the following error when trying to
apply the manifest:`error validating "pooler.yaml": error validating data:
ValidationError(Pooler.spec.template.spec): missing required field
"containers"`
!!!
#### Custom image and resource limits

This example sets resources and changes the used image:
You can specify a custom image and define resource requests/limits. Note that
the container name is explicitly set to `pgbouncer`.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
@@ -286,6 +302,7 @@ spec:
app: pooler
spec:
containers:
# This name MUST be "pgbouncer"
- name: pgbouncer
image: my-pgbouncer:latest
resources:
@@ -648,9 +665,10 @@ spec:

### Deprecation of Automatic `PodMonitor` Creation

!!!warning "Feature Deprecation Notice"
The `.spec.monitoring.enablePodMonitor` field in the `Pooler` resource is
now deprecated and will be removed in a future version of the operator.
!!!warning Feature Deprecation Notice
The `.spec.monitoring.enablePodMonitor` field in the `Pooler` resource is
now deprecated and will be removed in a future version of the operator.
!!!

If you are currently using this feature, we strongly recommend you either
remove or set `.spec.monitoring.enablePodMonitor` to `false` and manually
Original file line number Diff line number Diff line change
@@ -173,6 +173,9 @@ data:
, COALESCE(CAST(CAST('x'||pg_catalog.right(pg_catalog.split_part(last_failed_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_failed_wal_start_lsn
, EXTRACT(EPOCH FROM stats_reset) AS stats_reset_time
FROM pg_catalog.pg_stat_archiver
predicate_query: |
SELECT NOT pg_catalog.pg_is_in_recovery()
OR pg_catalog.current_setting('archive_mode') = 'always'
metrics:
- archived_count:
usage: "COUNTER"
Original file line number Diff line number Diff line change
@@ -135,9 +135,17 @@ spec:

The `name` field is **mandatory** and **must be unique within the cluster**, as
it determines the mount path (`/extensions/foo` in this example). It must
consist of *lowercase alphanumeric characters or hyphens (`-`)* and must start
consist of *lowercase alphanumeric characters, underscores (`_`) or hyphens (`-`)* and must start
and end with an alphanumeric character.

!!!note
Extension names containing underscores (e.g., `pg_ivm`) are converted to use
hyphens (e.g., `pg-ivm`) for Kubernetes volume names to comply with RFC 1123
DNS label requirements. Do not use extension names that become identical after
sanitization (e.g., `pg_ivm` and `pg-ivm` both sanitize to `pg-ivm`). The
webhook validation will prevent such conflicts.
!!!

The `image` stanza follows the [Kubernetes `ImageVolume` API](https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/).
The `reference` must point to a valid container registry path for the extension
image.
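
For orientation, a sketch of such a stanza is shown below, assuming the `extensions` list lives under `.spec.postgresql` as in upstream CloudNativePG; the image reference is an illustrative placeholder, and `foo` matches the mount-path example above:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    extensions:
      # Mounted at /extensions/foo inside the instance pods
      - name: foo
        image:
          # Illustrative registry path; point this at your extension image
          reference: registry.example.com/extensions/foo:latest
  storage:
    size: 1Gi
```
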
18 changes: 6 additions & 12 deletions product_docs/docs/postgres_for_kubernetes/1/index.mdx
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@ description: The {{name.ln}} operator is a fork based on CloudNativePG™ which
originalFilePath: src/index.md
indexCards: none
directoryDefaults:
version: "1.28.0"
version: "1.28.1"
redirects:
- /postgres_for_kubernetes/preview/:splat
navigation:
@@ -144,19 +144,13 @@ users can expect a **"Level V - Auto Pilot"** set of capabilities from the

### Long Term Support

EDB is committed to declaring a Long Term Support (LTS) version of {{name.ln}} annually. 1.25 is the current LTS version. 1.18 was the
first. Each LTS version will
receive maintenance releases and be supported for an additional 12 months beyond
the last community release of CloudNativePG for the same version.
EDB is committed to declaring a Long Term Support (LTS) version of {{name.ln}} annually. 1.28 is the current LTS version. Each LTS version will
receive maintenance releases and be supported for an additional 12 months beyond the standard 6 months for a total of 18 months from the initial release.

For example, the 1.22 release of CloudNativePG reached End-of-Life on July
24, 2024, for the open source community.
Because it was declared an LTS version of {{name.ln}}, it
will be supported for an additional 12 months, until July 24, 2025.
For example, v1.25 of {{name.ln}} was released in December 2024.
Because it was declared an LTS version, it will be supported for a total of 18 months, until June 2026.

In addition, customers will always have at least 6 months to move between LTS versions.
For example, the 1.25 LTS version was released on December 23 2024, giving ample
time to users to migrate from the 1.22 LTS ahead of the End-of-life on July 2025.
For a list of currently supported versions and their statuses, see: [Platform Compatibility](https://www.enterprisedb.com/resources/platform-compatibility#edb%20postgres%C2%AE%20ai%20for%20cloudnativepg%E2%84%A2%20cluster).

While we encourage customers to regularly upgrade to the latest version of the operator to take
advantage of new features, having LTS versions allows customers desiring additional stability to stay on the same
Original file line number Diff line number Diff line change
@@ -74,7 +74,7 @@ for this minor release as follows:

```sh
kubectl apply --server-side -f \
https://get.enterprisedb.io/pg4k/pg4k-1.28.0.yaml
https://get.enterprisedb.io/pg4k/pg4k-1.28.1.yaml
```

You can verify that with:
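
One possible check, assuming the operator was installed through the standard manifest into the `postgresql-operator-system` namespace (adjust the namespace to match your deployment):

```sh
kubectl get deployments -n postgresql-operator-system -o wide
```
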
@@ -323,7 +323,7 @@ Your applications will need to reconnect to PostgreSQL after the upgrade.

#### Deprecation of backup metrics and fields in the `Cluster` `.status`

With the transition to a backup and recovery agnostic approach based on CNP-I
With the transition to a backup and recovery agnostic approach based on CNPG-I
plugins in {{name.ln}}, which began with version 1.26.0 for Barman Cloud, we
are starting the deprecation period for the following fields in the `.status`
section of the `Cluster` resource:
21 changes: 16 additions & 5 deletions product_docs/docs/postgres_for_kubernetes/1/iron-bank.mdx
Original file line number Diff line number Diff line change
@@ -39,14 +39,20 @@ the image. From there, you can get the instruction to pull the image:

![pulling-ironbank-images](./images/ironbank/pulling-the-image.png)

For example, to pull the EPAS16 operand from Ironbank, you can run:
For example, to pull the EPAS 18 operand from Ironbank, you can run:

```bash
docker pull registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-16:16
docker pull registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-18:18
```

Similarly, for EPAS 16 or 17:

```bash
docker pull registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-17:17
docker pull registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-16:16
```

If you want to pick a more specific tag or use a specific SHA, you need to find it from the [Harbor page](https://registry1.dso.mil/harbor/projects/3/repositories/enterprisedb%2Fedb-postgres-advanced-16/artifacts-tab).
If you want to pick a more specific tag or use a specific SHA, you need to find it from the Harbor page (e.g., [EPAS 18](https://registry1.dso.mil/harbor/projects/3/repositories/enterprisedb%2Fedb-postgres-advanced-18/artifacts-tab), [EPAS 17](https://registry1.dso.mil/harbor/projects/3/repositories/enterprisedb%2Fedb-postgres-advanced-17/artifacts-tab), [EPAS 16](https://registry1.dso.mil/harbor/projects/3/repositories/enterprisedb%2Fedb-postgres-advanced-16/artifacts-tab)).
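
As a sketch, pinning to a specific tag or digest follows the usual registry syntax; the tag and digest below are placeholders you would replace with values taken from Harbor:

```bash
# Pin to a specific tag
docker pull registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-18:<tag>

# Or pin to an immutable digest
docker pull registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-18@sha256:<digest>
```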

## Installing the {{name.short}} operator using the Iron Bank image

@@ -99,7 +105,7 @@ Once you have this in place, you can apply your manifest normally with
## Deploying clusters with EPAS operands using IronBank images

To deploy a cluster using the EPAS [operand](/postgres_for_kubernetes/latest/private_edb_registries/#operand-images) you must reference the Ironbank operand image appropriately in the `Cluster` resource YAML.
For example, to deploy a {{name.short}} Cluster using the EPAS 16 operand:
For example, to deploy a {{name.short}} Cluster using the EPAS 18 operand:

1. Create or edit a `Cluster` resource YAML file with the following content:

@@ -109,11 +115,16 @@ For example, to deploy a {{name.short}} Cluster using the EPAS 16 operand:
metadata:
name: cluster-example-full
spec:
imageName: registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-17:17
imageName: registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-18:18
imagePullSecrets:
- name: my_ironbank_secret
```

For EPAS 17 or 16, use the corresponding image:

- EPAS 17: `registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-17:17`
- EPAS 16: `registry1.dso.mil/ironbank/enterprisedb/edb-postgres-advanced-16:16`

2. Apply the YAML:

```