+
+## Frequently asked questions
+
+
+Can I package any software or are there any prerequisites to be a Controller?
+
+We define a *Controller* as software that ships at least one Custom Resource Definition (CRD) and a Kubernetes controller for that CRD. This is the minimum requirement to be a *Controller*, and checks at packaging time enforce it.
+
+
+
+
+How can I package my software as a Controller?
+
+Currently, we support Helm charts as the underlying package format for *Controllers*. As long as you have a Helm chart, you can package it as a *Controller*.
+
+If you don't have a Helm chart, you can't package the software as a *Controller* today. We may extend support to other packaging formats, like Kustomize, in the future.
+
+
+
+
+Can I package Crossplane XRDs/Compositions as a Helm chart to deploy as a Controller?
+
+This is not recommended. For packaging Crossplane XRDs and Compositions, we recommend using the `Configuration` package format. A Helm chart containing only Crossplane XRDs and Compositions doesn't qualify as a *Controller*.
+
+
+
+
+How can I override the Helm values when deploying a Controller?
+
+Overriding the Helm values is possible at two levels:
+- During packaging time, in the package manifest file.
+- At runtime, using a `ControllerRuntimeConfig` resource (similar to Crossplane `DeploymentRuntimeConfig`).
+
+
+
+
+How can I configure the Helm release name and namespace for the controller?
+
+Right now, it's not possible to configure this at runtime. The package author sets the release name and namespace at packaging time, so they're hardcoded inside the package. Unlike a regular application deployed from a Helm chart, a *Controller* can only be deployed once in a given control plane, so relying on predefined release names and namespaces should be acceptable. We may consider exposing these in `ControllerRuntimeConfig` later, but we'd like to keep it opinionated unless there are strong reasons to do otherwise.
+
+
+
+
+Can I deploy more than one instance of a Controller package?
+
+No, this is not possible. Remember, a *Controller* package introduces CRDs which are cluster-scoped objects. Just like one cannot deploy more than one instance of the same Crossplane Provider package today, it is not possible to deploy more than one instance of a *Controller*.
+
+
+
+
+Do I need a specific Crossplane version to run Controllers?
+
+Yes, you need to use Crossplane v1.19.0 or later to use *Controllers*. This is because of the changes in the Crossplane codebase to support third-party package formats in dependencies.
+
+Spaces `v1.12.0` supports Crossplane `v1.19` in the *Rapid* release channel.
+
+
+
+
+Can I deploy Controllers outside of an Upbound control plane? With UXP?
+
+No, *Controllers* are a proprietary package format and are only available for control planes running in Spaces hosting environments in Upbound.
+
+
+
+
+[cli]: /manuals/uxp/overview
+
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/ctp-audit-logs.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/ctp-audit-logs.md
new file mode 100644
index 000000000..e387b2873
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/ctp-audit-logs.md
@@ -0,0 +1,544 @@
+---
+title: Control plane audit logging
+---
+
+This guide explains how to enable and configure audit logging for control planes
+in Self-Hosted Upbound Spaces.
+
+Starting in Spaces `v1.14.0`, each control plane contains an API server that
+supports audit log collection. You can use audit logging to track creation,
+updates, and deletions of Crossplane resources. Control plane audit logs
+use observability features to collect audit logs with `SharedTelemetryConfig` and
+send logs to an OpenTelemetry (`OTEL`) collector.
+
+
+## Prerequisites
+
+Before you begin, make sure you have:
+
+* Spaces `v1.14.0` or greater
+* Admin access to your Spaces host cluster
+* `kubectl` configured to access the host cluster
+* `helm` installed
+* `yq` installed
+* `up` CLI installed and logged in to your organization
+
+## Enable observability
+
+
+Observability graduated to General Availability in `v1.14.0` but is disabled by
+default.
+
+
+
+
+
+### Before `v1.14`
+To enable the GA Observability feature, upgrade your Spaces installation to `v1.14.0`
+or later and update your installation setting to the new flag:
+
+```diff
+helm upgrade spaces upbound/spaces -n upbound-system \
+- --set "features.alpha.observability.enabled=true"
++ --set "observability.enabled=true"
+```
+
+
+
+### After `v1.14`
+
+To enable the GA Observability feature for `v1.14.0` and later, pass the feature
+flag:
+
+```sh
+helm upgrade spaces upbound/spaces -n upbound-system \
+ --set "observability.enabled=true"
+
+```
+
+
+
+
+To confirm Observability is enabled, run the `helm get values` command:
+
+
+```shell
+helm get values --namespace upbound-system spaces | yq .observability
+```
+
+Your output should return:
+
+```shell-noCopy
+ enabled: true
+```
+
+## Install an observability backend
+
+:::note
+If you already have an observability backend in your environment, skip to the
+next section.
+:::
+
+
+For this guide, you'll use Grafana's `docker-otel-lgtm` bundle to validate audit log
+generation. In production environments, configure a dedicated observability
+backend like Datadog, Splunk, or an enterprise-grade Grafana stack.
+
+
+
+First, make sure your `kubectl` context points to your Spaces host cluster:
+
+```shell
+kubectl config current-context
+```
+
+The output should return your cluster name.
+
+Next, install `docker-otel-lgtm` as a deployment using port-forwarding to
+connect to Grafana. Create a manifest file and paste the
+following configuration:
+
+```yaml title="otel-lgtm.yaml"
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: observability
+---
+apiVersion: v1
+kind: Service
+metadata:
+ labels:
+ app: otel-lgtm
+ name: otel-lgtm
+ namespace: observability
+spec:
+ ports:
+ - name: grpc
+ port: 4317
+ protocol: TCP
+ targetPort: 4317
+ - name: http
+ port: 4318
+ protocol: TCP
+ targetPort: 4318
+ - name: grafana
+ port: 3000
+ protocol: TCP
+ targetPort: 3000
+ selector:
+ app: otel-lgtm
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: otel-lgtm
+ labels:
+ app: otel-lgtm
+ namespace: observability
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: otel-lgtm
+ template:
+ metadata:
+ labels:
+ app: otel-lgtm
+ spec:
+ containers:
+ - name: otel-lgtm
+ image: grafana/otel-lgtm
+ ports:
+ - containerPort: 4317
+ - containerPort: 4318
+ - containerPort: 3000
+```
+
+Next, apply the manifest:
+
+```shell
+kubectl apply --filename otel-lgtm.yaml
+```
+
+Your output should return the resources:
+
+```shell-noCopy
+namespace/observability created
+service/otel-lgtm created
+deployment.apps/otel-lgtm created
+```
+
+To verify your resources deployed, use `kubectl get` to display resources with
+an `ACTIVE` or `READY` status.
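+
+For example, assuming the resource names above:
+
+```shell
+kubectl get namespace observability
+kubectl get deployment,service --namespace observability
+```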
+
+Next, forward the Grafana port:
+
+```shell
+kubectl port-forward svc/otel-lgtm --namespace observability 3000:3000
+```
+
+Now you can access the Grafana UI at http://localhost:3000.
+
+
+## Create an audit-enabled control plane
+
+To enable audit logging for a control plane, you need to label it so the
+`SharedTelemetryConfig` can identify and apply audit settings. This section
+creates a new control plane with the `audit-enabled: "true"` label, which marks
+it for audit logging. The `SharedTelemetryConfig` (created in the next section)
+finds control planes with this label and enables audit logging on them.
+
+Create a new manifest file and paste the configuration below:
+
+
+```yaml title="ctp-audit.yaml"
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: audit-test
+---
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ labels:
+ audit-enabled: "true"
+ name: ctp1
+ namespace: audit-test
+spec:
+ writeConnectionSecretToRef:
+ name: kubeconfig-ctp1
+ namespace: audit-test
+```
+
+
+The `metadata.labels` section contains the `audit-enabled` setting.
+
+Apply the manifest:
+
+```shell
+kubectl apply --filename ctp-audit.yaml
+```
+
+Confirm your control plane reaches the `READY` status:
+
+```shell
+kubectl get --filename ctp-audit.yaml
+```
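+
+A trimmed, illustrative sketch of the control plane portion of the output (the
+columns and values vary by Spaces version):
+
+```shell-noCopy
+NAME   CROSSPLANE   SUPPORTED   READY   MESSAGE   AGE
+ctp1   1.20.0       True        True              2m
+```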
+
+## Create a `SharedTelemetryConfig`
+
+The `SharedTelemetryConfig` applies to control plane objects in a namespace,
+enabling audit logging and routing logs to your `OTEL` endpoint.
+
+Create a `SharedTelemetryConfig` manifest file and paste the configuration
+below:
+
+
+```yaml title="sharedtelemetryconfig.yaml"
+apiVersion: observability.spaces.upbound.io/v1alpha1
+kind: SharedTelemetryConfig
+metadata:
+ name: apiserver-audit
+ namespace: audit-test
+spec:
+ apiServer:
+ audit:
+ enabled: true
+ exporters:
+ otlphttp:
+ endpoint: http://otel-lgtm.observability:4318
+ exportPipeline:
+ logs: [otlphttp]
+ controlPlaneSelector:
+ labelSelectors:
+ - matchLabels:
+ audit-enabled: "true"
+```
+
+
+This configuration:
+
+* Sets `apiServer.audit.enabled` to `true`
+* Configures the `otlphttp` exporter to point to the `docker-otel-lgtm` service
+* Uses `controlPlaneSelector` to match any control plane in the namespace with the `audit-enabled` label set to `true`
+
+:::note
+You can configure the `SharedTelemetryConfig` to select control planes in
+several ways. For more information on control plane selection, see the [control
+plane selection][ctp-selection] documentation.
+:::
+
+Apply the `SharedTelemetryConfig`:
+
+```shell
+kubectl apply --filename sharedtelemetryconfig.yaml
+```
+
+Confirm the configuration selected the control plane:
+
+```shell
+kubectl get --filename sharedtelemetryconfig.yaml
+```
+
+The output should return `SELECTED` as `1` and `VALIDATED` as `TRUE`.
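+
+A trimmed, illustrative sketch of the output:
+
+```shell-noCopy
+NAME              SELECTED   VALIDATED
+apiserver-audit   1          True
+```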
+
+For more detailed status information, use `kubectl get`:
+
+```shell
+kubectl get --filename sharedtelemetryconfig.yaml --output yaml | yq .status
+```
+
+## Generate and monitor audit events
+
+You enabled telemetry on your new control plane and can now generate events to
+test the audit logging. This guide uses the `nop-provider` to simulate resource
+operations.
+
+Switch your `up` context to the new control plane:
+
+```shell
+up ctx <organization>/<space>/<group>/<control-plane>
+```
+
+Create a new Provider manifest:
+
+```yaml title="provider-nop.yaml"
+apiVersion: pkg.crossplane.io/v1
+kind: Provider
+metadata:
+  name: crossplane-contrib-provider-nop
+spec:
+  package: xpkg.upbound.io/crossplane-contrib/provider-nop:v0.4.0
+```
+
+Apply the provider manifest:
+
+```shell
+kubectl apply --filename provider-nop.yaml
+```
+
+Verify the provider installed and returns `HEALTHY` status as `TRUE`.
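+
+For example:
+
+```shell
+kubectl get providers.pkg.crossplane.io
+```
+
+The `INSTALLED` and `HEALTHY` columns should both report `True`.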
+
+Apply an example resource to kick off event generation:
+
+
+```shell
+kubectl apply --filename https://raw.githubusercontent.com/crossplane-contrib/provider-nop/refs/heads/main/examples/nopresource.yaml
+```
+
+In your Grafana dashboard, navigate to **Drilldown** > **Logs** under the
+Grafana menu.
+
+
+Filter for `controlplane-audit` log messages.
+
+Create a query to find `create` events on `nopresources` by filtering:
+
+* The `verb` field for `create` events
+* The `objectRef_resource` field to match the Kind `nopresources`
+
+Review the audit log results. The log stream displays:
+
+* The client applying the create operation
+* The resource kind
+* Client details
+* The response code
+
+The following example shows a complete audit log entry:
+
+
+
+```json
+{
+ "level": "Metadata",
+ "auditID": "51bbe609-14ad-4874-be78-1289c10d506a",
+ "stage": "ResponseComplete",
+ "requestURI": "/apis/nop.crossplane.io/v1alpha1/nopresources?fieldManager=kubectl-client-side-apply&fieldValidation=Strict",
+ "verb": "create",
+ "user": {
+ "username": "kubernetes-admin",
+ "groups": ["system:masters", "system:authenticated"]
+ },
+ "impersonatedUser": {
+ "username": "upbound:spaces:host:masterclient",
+ "groups": [
+ "system:authenticated",
+ "upbound:controlplane:admin",
+ "upbound:spaces:host:system:masters"
+ ]
+ },
+ "sourceIPs": ["10.244.0.135", "127.0.0.1"],
+ "userAgent": "kubectl/v1.32.2 (darwin/arm64) kubernetes/67a30c0",
+ "objectRef": {
+ "resource": "nopresources",
+ "name": "example",
+ "apiGroup": "nop.crossplane.io",
+ "apiVersion": "v1alpha1"
+ },
+ "responseStatus": { "metadata": {}, "code": 201 },
+ "requestReceivedTimestamp": "2025-09-19T23:03:24.540067Z",
+ "stageTimestamp": "2025-09-19T23:03:24.557583Z",
+ "annotations": {
+ "authorization.k8s.io/decision": "allow",
+ "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"controlplane-admin\" of ClusterRole \"controlplane-admin\" to Group \"upbound:controlplane:admin\""
+ }
+ }
+```
+
+
+## Customize the audit policy
+
+Spaces `v1.14.0` includes a default audit policy. You can customize this policy
+by setting `observability.controlPlanes.apiServer.auditPolicy` in your Helm
+values file.
+
+An example custom audit policy:
+
+```yaml
+observability:
+ controlPlanes:
+ apiServer:
+ auditPolicy: |
+ apiVersion: audit.k8s.io/v1
+ kind: Policy
+ rules:
+ # ============================================================================
+ # RULE 1: Exclude health check and version endpoints
+ # ============================================================================
+ - level: None
+ nonResourceURLs:
+ - '/healthz*'
+ - '/readyz*'
+ - /version
+ # ============================================================================
+ # RULE 2: ConfigMaps - Write operations only
+ # ============================================================================
+ - level: Metadata
+ resources:
+ - group: ""
+ resources:
+ - configmaps
+ verbs:
+ - create
+ - update
+ - patch
+ - delete
+ omitStages:
+ - RequestReceived
+ - ResponseStarted
+ # ============================================================================
+ # RULE 3: Secrets - ALL operations
+ # ============================================================================
+ - level: Metadata
+ resources:
+ - group: ""
+ resources:
+ - secrets
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ omitStages:
+ - RequestReceived
+ - ResponseStarted
+ # ============================================================================
+ # RULE 4: Global exclusion of read-only operations
+ # ============================================================================
+ - level: None
+ verbs:
+ - get
+ - list
+ - watch
+ # ==========================================================================
+ # RULE 5: Exclude standard Kubernetes resources from write operation logging
+ # ==========================================================================
+ - level: None
+ resources:
+ - group: ""
+ - group: "apps"
+ - group: "networking.k8s.io"
+ - group: "policy"
+ - group: "rbac.authorization.k8s.io"
+ - group: "storage.k8s.io"
+ - group: "batch"
+ - group: "autoscaling"
+ - group: "metrics.k8s.io"
+ - group: "node.k8s.io"
+ - group: "scheduling.k8s.io"
+ - group: "coordination.k8s.io"
+ - group: "discovery.k8s.io"
+ - group: "events.k8s.io"
+ - group: "flowcontrol.apiserver.k8s.io"
+ - group: "internal.apiserver.k8s.io"
+ - group: "authentication.k8s.io"
+ - group: "authorization.k8s.io"
+ - group: "admissionregistration.k8s.io"
+ verbs:
+ - create
+ - update
+ - patch
+ - delete
+ # ============================================================================
+ # RULE 6: Catch-all for ALL custom resources and any missed resources
+ # ============================================================================
+ - level: Metadata
+ verbs:
+ - create
+ - update
+ - patch
+ - delete
+ omitStages:
+ - RequestReceived
+ - ResponseStarted
+ # ============================================================================
+ # RULE 7: Final catch-all - exclude everything else
+ # ============================================================================
+ - level: None
+ omitStages:
+ - RequestReceived
+ - ResponseStarted
+```
+
+You can apply this policy during Spaces installation or upgrade using the Helm values file.
+
+Audit policies use rules evaluated in order from top to bottom where the first
+matching rule applies. Control plane audit policies follow Kubernetes conventions and use the
+following logging levels:
+
+* **None** - Don't log events matching this rule
+* **Metadata** - Log request metadata (user, timestamp, resource, verb) but not request or response bodies
+* **Request** - Log metadata and request body but not response body
+* **RequestResponse** - Log metadata, request body, and response body
+
+For more information, review the Kubernetes [Auditing] documentation.
+
+## Disable audit logging
+
+You can disable audit logging on a control plane by removing it from the
+`SharedTelemetryConfig` selector or by deleting the `SharedTelemetryConfig`.
+
+### Disable for specific control planes
+
+Remove the `audit-enabled` label from control planes that should stop sending audit logs:
+
+```bash
+kubectl label controlplane <controlplane-name> --namespace <namespace> audit-enabled-
+```
+
+The `SharedTelemetryConfig` no longer selects this control plane, and audit log collection stops.
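+
+To confirm, check the labels and the `SELECTED` count:
+
+```shell
+kubectl get controlplanes --namespace <namespace> --show-labels
+kubectl get sharedtelemetryconfig apiserver-audit --namespace <namespace>
+```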
+
+### Disable for all control planes
+
+Delete the `SharedTelemetryConfig` to stop audit logging for all control planes it manages:
+
+```bash
+kubectl delete sharedtelemetryconfig <name> --namespace <namespace>
+```
+
+[ctp-selection]: /spaces/howtos/observability/#control-plane-selection
+[Auditing]: https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/declarative-ctps.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/declarative-ctps.md
new file mode 100644
index 000000000..12447b6fb
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/declarative-ctps.md
@@ -0,0 +1,105 @@
+---
+title: Declaratively create control planes
+sidebar_position: 99
+description: A tutorial to configure a Space with Argo to declaratively create and
+ manage control planes
+---
+
+In this tutorial, you learn how to configure [Argo CD][argo-cd] to communicate with a self-hosted Space. This flow allows you to declaratively create and manage control planes from Git. Argo CD is a continuous delivery tool for Kubernetes that you can use to drive GitOps flows for your control plane infrastructure.
+
+
+## Prerequisites
+
+To complete this tutorial, you need the following:
+
+- An Upbound Space, already deployed.
+- An instance of Argo CD, already deployed on a Kubernetes cluster.
+
+## Connect your Space to Argo CD
+
+Fetch the kubeconfig for the Space cluster, the Kubernetes cluster where you installed the Upbound Spaces software. You must add the Space cluster as a context to Argo.
+
+```ini
+export SPACES_CLUSTER_SERVER="https://url"
+export SPACES_CLUSTER_NAME="cluster"
+```
+
+Switch contexts to the Kubernetes cluster where you've installed Argo. Create a secret on the Argo cluster whose data contains the connection details of the Space cluster.
+
+:::important
+Make sure the following commands are executed against your **Argo** cluster, not your Space cluster.
+:::
+
+Run the following command in a terminal:
+
+```yaml
+cat <<EOF | kubectl apply -f -
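+# A sketch of an Argo CD declarative cluster secret for the Space cluster,
+# reusing the SPACES_CLUSTER_SERVER and SPACES_CLUSTER_NAME values exported
+# above. The secret name, namespace, and tlsClientConfig contents are
+# assumptions; pull the real credentials (for example caData, certData,
+# keyData, or a bearer token) from the Space cluster kubeconfig you fetched.
+apiVersion: v1
+kind: Secret
+metadata:
+  name: spaces-cluster
+  namespace: argocd
+  labels:
+    argocd.argoproj.io/secret-type: cluster
+type: Opaque
+stringData:
+  name: ${SPACES_CLUSTER_NAME}
+  server: ${SPACES_CLUSTER_SERVER}
+  config: |
+    {
+      "tlsClientConfig": {
+        "insecure": false,
+        "caData": "<base64-encoded CA from the Space cluster kubeconfig>"
+      }
+    }
+EOF
+```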
+When you install a Crossplane provider on a control plane, memory gets consumed
+according to the number of custom resources it defines. Upbound [Official Provider families][official-provider-families] give platform teams finer-grained
+control to install providers for only the resources they need, reducing the
+bloat of needlessly installing unused custom resources. Still, you
+must factor provider memory usage into your calculations to ensure you've
+rightsized the memory available in your Spaces cluster.
+
+
+:::important
+Be careful not to conflate `managed resource` with `custom resource definition`.
+The former is an "instance" of an external resource in Crossplane, while the
+latter defines the API schema of that resource.
+:::
+
+It's estimated that each custom resource definition consumes ~3 MB of memory.
+The calculation is:
+
+```bash
+number_of_managed_resources_defined_in_provider x 3 MB = memory_required
+```
+
+For example, if you plan to use [provider-aws-ec2][provider-aws-ec2], [provider-aws-s3][provider-aws-s3], and [provider-aws-iam][provider-aws-iam], the resulting calculation is:
+
+```bash
+provider-aws-ec2: 98 x 3 MB = 294 MB
+provider-aws-s3: 23 x 3 MB = 69 MB
+provider-aws-iam 22 x 3 MB = 66 MB
+---
+total memory: 429 MB
+```
+
+In this scenario, you should budget ~430 MB of memory for provider usage on this control plane.
+
+:::tip
+Do this calculation for each provider you plan to install on your control plane.
+Then do this calculation for each control plane you plan to run in your Space.
+:::
+
+
+#### Total memory usage
+
+Add the memory usage from the previous sections. Given the preceding examples,
+they result in a recommendation to budget ~1 GB memory for each control plane
+you plan to run in the Space.
+
+:::important
+
+The 1 GB recommendation is an example.
+You should input your own provider requirements to arrive at a final number for
+your own deployment.
+
+:::
+
+### CPU considerations
+
+#### Managed resource CPU usage
+
+The number of managed resources under management by a control plane is the largest contributing factor for CPU usage in a Space. CPU usage scales linearly with the number of managed resources your control plane manages. In Upbound's testing, CPU usage requirements _do_ vary from provider to provider. Using the Upbound Official Provider families as a baseline:
+
+
+| Provider | MR create operation (CPU core seconds) | MR update or reconciliation operation (CPU core seconds) |
+| ---- | ---- | ---- |
+| provider-family-aws | 10 | 2 to 3 |
+| provider-family-gcp | 7 | 1.5 |
+| provider-family-azure | 7 to 10 | 1.5 to 3 |
+
+
+When resources are in a non-ready state, Crossplane providers reconcile often (as fast as every 15 seconds). Once a resource reaches `READY`, each Crossplane provider defaults to a 10 minute poll interval. Given this, a 16-core machine has `16 x 10 x 60 = 9600` CPU core seconds available per 10 minute interval. Interpreting this table:
+
+- A single control plane that needs to create 100 AWS MRs concurrently would consume 1000 CPU core seconds, or about 1.5 cores.
+- A single control plane that continuously reconciles 100 AWS MRs once they've reached a `READY` state would consume 300 CPU core seconds, or a little under half a core.
+
+Since `provider-family-aws` has the highest recorded numbers for CPU time required, you can use that as an upper limit in your calculations.
+
+Using these calculations and extrapolating values, given a 16 core machine, it's recommended you don't exceed a single control plane managing 1000 MRs. Suppose you plan to run 10 control planes, each managing 1000 MRs. You want to make sure your node pool has capacity for 160 cores. If you are using a machine type that has 16 cores per machine, that would mean having a node pool of size 10. If you are using a machine type that has 32 cores per machine, that would mean having a node pool of size 5.
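+
+As a rough worked example, using the upper-bound `provider-family-aws` numbers from the table and the 10 minute (600 second) poll interval, sizing a single control plane that manages 1,000 MRs looks like:
+
+```bash
+available on a 16-core node:   16 cores x 600 s = 9600 core-seconds per interval
+peak, all 1000 MRs creating:   1000 x 10 s      = 10000 core-seconds  # ~17 cores
+steady-state reconciliation:   1000 x 3 s       = 3000 core-seconds   # ~5 cores
+```
+
+The peak case roughly saturates a 16-core node, which matches the guidance above to cap a single control plane at about 1,000 MRs per 16 cores.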
+
+#### Cloud API latency
+
+Oftentimes, you are using Crossplane providers to talk to external cloud APIs. Those external cloud APIs often have global API rate limits (examples: [Azure limits][azure-limits], [AWS EC2 limits][aws-ec2-limits]).
+
+For Crossplane providers built on [Upjet][upjet] (such as Upbound Official Provider families), these providers use Terraform under the covers. They expose some knobs (such as `--max-reconcile-rate`) you can use to tweak reconciliation rates.
+
+### Resource buffers
+
+The guidance in the preceding sections explains how to calculate CPU and memory usage requirements for:
+
+- a set of control planes in a Space
+- tuned to the number of providers you plan to use
+- according to the number of managed resource instances you plan to have managed by your control planes
+
+Upbound recommends adding an extra 20% buffer to your resource capacity calculations. The numbers shared in the preceding sections are based on average measurements and don't account for peaks or surges; the buffer covers these.
+
+## Deploying more than one Space
+
+You can deploy more than one Space. You just need to maintain a 1:1 mapping of Spaces to Kubernetes clusters: a Space is by its nature constrained to a single Kubernetes cluster, which is a regional entity. Offering control planes in multiple cloud environments, or in multiple public clouds entirely, are both justifications for deploying more than one Space.
+
+## Cert-manager
+
+A Spaces deployment uses the [Certificate Custom Resource] from cert-manager to
+provision certificates within the Space. This establishes a nice API boundary
+between what your platform may need and the Certificate requirements of a
+Space.
+
+
+In the event you would like more control over the issuing Certificate Authority
+for your deployment or the deployment of cert-manager itself, this guide is for
+you.
+
+
+### Deploying
+
+An Upbound Space deployment doesn't have any special requirements for the
+cert-manager deployment itself. The only expectation is that cert-manager and
+the corresponding Custom Resources exist in the cluster.
+
+You should be free to install cert-manager in the cluster in any way that makes
+sense for your organization. You can find some [installation ideas] in the
+cert-manager docs.
+
+### Issuers
+
+A default Upbound Space install includes a [ClusterIssuer]. This `ClusterIssuer`
+is a `selfSigned` issuer that other certificates are minted from. You have a
+couple of options available to you for changing the default deployment of the
+Issuer:
+1. Changing the issuer name.
+2. Providing your own ClusterIssuer.
+
+
+#### Changing the issuer name
+
+The `ClusterIssuer` name is controlled by the `certificates.space.clusterIssuer`
+Helm property. You can adjust this during installation by providing the
+following parameter (assuming your new name is 'SpaceClusterIssuer'):
+```shell
+--set "certificates.space.clusterIssuer=SpaceClusterIssuer"
+```
+
+
+
+#### Providing your own ClusterIssuer
+
+To provide your own `ClusterIssuer`, you need to first setup your own
+`ClusterIssuer` in the cluster. The cert-manager docs have a variety of options
+for providing your own. See the [Issuer Configuration] docs for more details.
+
+Once you have your own `ClusterIssuer` set up in the cluster, you need to turn
+off the deployment of the `ClusterIssuer` included in the Spaces deployment.
+To do that, provide the following parameter during installation:
+```shell
+--set "certificates.provision=false"
+```
+
+##### Considerations
+If your `ClusterIssuer` has a name that's different from the default name that
+the Spaces installation expects ('spaces-selfsigned'), you need to also specify
+your `ClusterIssuer` name during install using:
+```shell
+--set "certificates.space.clusterIssuer="
+```
+
+## Ingress
+
+To route requests from an external client (kubectl, Argo CD, and so on) to a
+control plane, a Spaces deployment includes a default [Ingress] manifest. To
+ease getting-started scenarios, the current `Ingress` includes configuration
+(properties and annotations) that assumes you installed the commonly used
+[ingress-nginx ingress controller] in the cluster. This section walks you
+through using a different `Ingress`, if that's something your organization
+needs.
+
+### Default manifest
+
+The following example shows the default `Ingress` manifest included in a Spaces
+install:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: mxe-router-ingress
+ namespace: upbound-system
+ annotations:
+ nginx.ingress.kubernetes.io/use-regex: "true"
+ nginx.ingress.kubernetes.io/ssl-redirect: "false"
+ nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
+ nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
+ nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
+ nginx.ingress.kubernetes.io/proxy-body-size: "0"
+ nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
+ nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
+ nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
+ nginx.ingress.kubernetes.io/proxy-ssl-secret: "upbound-system/mxp-hostcluster-certs"
+ nginx.ingress.kubernetes.io/proxy-ssl-name: spaces-router
+ nginx.ingress.kubernetes.io/configuration-snippet: |
+ more_set_headers "X-Request-Id: $req_id";
+ more_set_headers "Request-Id: $req_id";
+ more_set_headers "Audit-Id: $req_id";
+spec:
+ ingressClassName: nginx
+ tls:
+ - hosts:
+ - {{ .Values.ingress.host }}
+ secretName: mxe-router-tls
+ rules:
+ - host: {{ .Values.ingress.host }}
+ http:
+ paths:
+ - path: "/v1/controlPlanes"
+ pathType: Prefix
+ backend:
+ service:
+ name: spaces-router
+ port:
+ name: http
+```
+
+The notable pieces are:
+1. Namespace
+
+
+
+This property represents the namespace that the spaces-router is deployed to.
+In most cases this is `upbound-system`.
+
+
+
+2. proxy-ssl-* annotations
+
+The spaces-router pod terminates TLS using certificates from the
+`mxp-hostcluster-certs` `Secret` in the `upbound-system` `Namespace`.
+
+3. proxy-* annotations
+
+Requests coming into the ingress controller vary depending on what the client
+is requesting. For example, `kubectl get crds` has different connection
+requirements compared to a watch, such as `kubectl get pods -w`. These
+annotations configure the ingress controller to account for either scenario.
+
+
+4. configuration-snippets
+
+These commands add headers to the incoming requests that help with telemetry
+and diagnosing problems within the system.
+
+5. Rules
+
+Requests coming into the control planes use a `/v1/controlPlanes` prefix and
+need to be routed to the spaces-router.
+
+
+### Using a different ingress manifest
+
+Operators can choose the `Ingress` manifest and ingress controller that make
+the most sense for their organization. To turn off deploying the default
+`Ingress` manifest, provide the following parameter during installation:
+```shell
+--set ".Values.ingress.provision=false"
+```
+
+#### Considerations
+
+
+
+
+
+Operators need to take the following considerations into account when
+disabling the default `Ingress` deployment.
+
+1. Ensure the custom `Ingress` manifest is placed in the same namespace as the
+`spaces-router` pod.
+2. Ensure that the ingress is configured to use the `spaces-router` service as a
+secure backend and that it uses the `mxp-hostcluster-certs` secret.
+3. Ensure that the ingress is configured to handle long-lived connections.
+4. Ensure that the routing rule sends requests prefixed with
+`/v1/controlPlanes` to the `spaces-router` using the `http` port.
+
+
+
+
+
+
+[cert-manager]: https://cert-manager.io/
+[Certificate Custom Resource]: https://cert-manager.io/docs/usage/certificate/
+[ClusterIssuer]: https://cert-manager.io/docs/concepts/issuer/
+[ingress-nginx ingress controller]: https://kubernetes.github.io/ingress-nginx/deploy/
+[installation ideas]: https://cert-manager.io/docs/installation/
+[Ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
+[Issuer Configuration]: https://cert-manager.io/docs/configuration/
+[official-provider-families]: /manuals/packages/providers/provider-families
+[aws-eks]: https://aws.amazon.com/eks/
+[google-cloud-gke]: https://cloud.google.com/kubernetes-engine
+[microsoft-aks]: https://azure.microsoft.com/en-us/products/kubernetes-service
+[upbound-account]: https://www.upbound.io/register/?utm_source=docs&utm_medium=cta&utm_campaign=docs_spaces
+[provider-aws-ec2]: https://marketplace.upbound.io/providers/upbound/provider-aws-ec2
+[provider-aws-s3]: https://marketplace.upbound.io/providers/upbound/provider-aws-s3
+[provider-aws-iam]: https://marketplace.upbound.io/providers/upbound/provider-aws-iam
+[azure-limits]: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling
+[aws-ec2-limits]: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/throttling.html#throttling-limits-rate-based
+[upjet]: https://github.com/upbound/upjet
diff --git a/docs/manuals/spaces/howtos/self-hosted/dr.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/dr.md
similarity index 99%
rename from docs/manuals/spaces/howtos/self-hosted/dr.md
rename to spaces_versioned_docs/version-1.13/howtos/self-hosted/dr.md
index 6e9899d26..9f9b9c1f8 100644
--- a/docs/manuals/spaces/howtos/self-hosted/dr.md
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/dr.md
@@ -4,6 +4,7 @@ sidebar_position: 13
description: Configure Space-wide backups for disaster recovery.
---
+
:::important
For Connected and Disconnected Spaces, this feature requires Spaces `v1.9.0` and, starting with `v1.14.0`, Spaces enables it by default.
@@ -393,7 +394,7 @@ kubectl exec -ti -n upbound-system deployments/spaces-controller -c spaces
```
-[shared-backups]: /manuals/spaces/howtos/self-hosted/workload-id/backup-restore-config/
+[shared-backups]: /spaces/howtos/self-hosted/workload-id/backup-restore-config/
[spacebackupconfig]: /reference/apis/spaces-api/v1_9
[thanos-object-storage]: https://thanos.io/tip/thanos/storage.md/
[spacebackupschedule]: /reference/apis/spaces-api/v1_9
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/gitops-with-argocd.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/gitops-with-argocd.md
new file mode 100644
index 000000000..004247a10
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/gitops-with-argocd.md
@@ -0,0 +1,142 @@
+---
+title: GitOps with ArgoCD in Self-Hosted Spaces
+sidebar_position: 80
+description: Set up GitOps workflows with Argo CD in self-hosted Spaces
+plan: "business"
+---
+
+:::info Deployment Model
+This guide applies to **self-hosted Spaces** deployments. For Upbound Cloud Spaces, see [GitOps with Upbound Control Planes](/spaces/howtos/cloud-spaces/gitops-on-upbound/).
+:::
+
+GitOps is an approach for managing a system by declaratively describing desired resources' configurations in Git and using controllers to realize the desired state. Upbound's control planes are compatible with this pattern and it's strongly recommended you integrate GitOps in the platforms you build on Upbound.
+
+
+## Integrate with Argo CD
+
+
+[Argo CD][argo-cd] is a project in the Kubernetes ecosystem commonly used for
+GitOps. You can use it in tandem with Upbound control planes to achieve GitOps
+flows. The sections below explain how to integrate these tools with Upbound.
+
+### Configure connection secrets for control planes
+
+You can configure control planes to write their connection details to a secret.
+Do this by setting the
+[`spec.writeConnectionSecretToRef`][spec-writeconnectionsecrettoref] field in a
+control plane manifest. For example:
+
+```yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ name: ctp1
+ namespace: default
+spec:
+ writeConnectionSecretToRef:
+ name: kubeconfig-ctp1
+ namespace: default
+```
+
+
+### Configure Argo CD
+
+
+To configure Argo CD for Annotation resource tracking, edit the Argo CD
+ConfigMap in the Argo CD namespace. Add `application.resourceTrackingMethod:
+annotation` to the data section as below.
+
+Next, configure the [auto respect RBAC for the Argo CD
+controller][auto-respect-rbac-for-the-argo-cd-controller-1]. By default, Argo CD
+attempts to discover some Kubernetes resource types that don't exist in a
+control plane. You must configure Argo CD to respect the cluster's RBAC rules so
+that Argo CD can sync. Add `resource.respectRBAC: normal` to the data section as
+below.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: argocd-cm
+data:
+ ...
+ application.resourceTrackingMethod: annotation
+ resource.respectRBAC: normal
+```
+
+:::tip
+The `resource.respectRBAC` configuration above tells Argo to respect RBAC for
+_all_ cluster contexts. If you're using an Argo CD instance to manage more than
+only control planes, you should consider changing the `clusters` string match
+for the configuration to apply only to control planes. For example, if every
+control plane context name followed the convention `controlplane-<name>`, you
+could set the string match to `controlplane-*`.
+:::
+
+
+### Create a cluster context definition
+
+
+Once the control plane is ready, extract the following values from the secret
+containing the kubeconfig:
+
+```bash
+kubeconfig_content=$(kubectl get secrets kubeconfig-ctp1 -n default -o jsonpath='{.data.kubeconfig}' | base64 -d)
+server=$(echo "$kubeconfig_content" | grep 'server:' | awk '{print $2}')
+bearer_token=$(echo "$kubeconfig_content" | grep 'token:' | awk '{print $2}')
+ca_data=$(echo "$kubeconfig_content" | grep 'certificate-authority-data:' | awk '{print $2}')
+```
+
+Generate a new secret in the cluster where you installed Argo, using the prior
+values extracted:
+
+```yaml
+cat <<EOF | kubectl apply -f -
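+# A sketch of an Argo CD declarative cluster secret for ctp1, reusing the
+# $server, $bearer_token, and $ca_data values extracted above. The secret name
+# and namespace are assumptions; adjust them for your Argo CD install.
+apiVersion: v1
+kind: Secret
+metadata:
+  name: ctp1
+  namespace: argocd
+  labels:
+    argocd.argoproj.io/secret-type: cluster
+type: Opaque
+stringData:
+  name: ctp1
+  server: ${server}
+  config: |
+    {
+      "bearerToken": "${bearer_token}",
+      "tlsClientConfig": {
+        "insecure": false,
+        "caData": "${ca_data}"
+      }
+    }
+EOF
+```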
+
+import GlobalLanguageSelector, { CodeBlock } from '@site/src/components/GlobalLanguageSelector';
+
+
+
+
+:::important
+This feature is only available for select Business Critical customers. You can't
+set up your own Managed Space without the assistance of Upbound. If you're
+interested in this deployment mode, please [contact us][contact].
+:::
+
+
+
+A Managed Space deployed on AWS is a single-tenant deployment of a control plane
+space in your AWS organization in an isolated sub-account. With Managed Spaces,
+you can use the same API, CLI, and Console that Upbound offers, with the benefit
+of running entirely in a cloud account that you own and Upbound manages for you.
+
+The following guide walks you through setting up a Managed Space in your AWS
+organization. If you have any questions while working through this guide,
+contact your Upbound Account Representative for help.
+
+
+
+
+
+A Managed Space deployed on GCP is a single-tenant deployment of a control plane
+space in your GCP organization in an isolated project. With Managed Spaces, you
+can use the same API, CLI, and Console that Upbound offers, with the benefit of
+running entirely in a cloud account that you own and Upbound manages for you.
+
+The following guide walks you through setting up a Managed Space in your GCP
+organization. If you have any questions while working through this guide,
+contact your Upbound Account Representative for help.
+
+
+
+
+## Managed Space on your cloud architecture
+
+
+
+A Managed Space is a deployment of the Upbound Spaces software inside an
+Upbound-controlled sub-account in your AWS cloud environment. The Spaces
+software runs in this sub-account, orchestrated by Kubernetes. Backups and
+billing data get stored inside bucket or blob storage in the same sub-account.
+The control planes deployed and controlled by the Spaces software runs on the
+Kubernetes cluster which gets deployed into the sub-account.
+
+The diagram below illustrates the high-level architecture of Upbound Managed Spaces:
+
+
+
+The Spaces software gets deployed on an EKS Cluster in the region of your
+choice. This EKS cluster is where your control planes are ultimately run.
+Upbound also deploys two buckets: one for the collection of billing data and one
+for control plane backups.
+
+Upbound doesn't have access to other sub-accounts nor your organization-level
+settings in your cloud environment. Outside of your cloud organization, Upbound
+runs the Upbound Console, which includes the Upbound API and web application,
+including the dashboard you see at `console.upbound.io`. By default, all
+connections are encrypted, but public. Optionally, you can use private network
+connectivity through [AWS PrivateLink][aws-privatelink].
+
+
+
+
+
+
+A Managed Space is a deployment of the Upbound Spaces software inside an
+Upbound-controlled project in your GCP cloud environment. The Spaces software
+runs in this project, orchestrated by Kubernetes. Backups and billing data get
+stored inside bucket or blob storage in the same project. The control planes
+deployed and controlled by the Spaces software run on the Kubernetes cluster
+deployed into the project.
+
+The diagram below illustrates the high-level architecture of Upbound Managed Spaces:
+
+
+
+The Spaces software gets deployed on a GKE Cluster in the region of your choice.
+This GKE cluster is where your control planes are ultimately run. Upbound also
+deploys two cloud buckets: one for the collection of billing data and one for
+control plane backups.
+
+Upbound doesn't have access to other projects nor your organization-level
+settings in your cloud environment. Outside of your cloud organization, Upbound
+runs the Upbound Console, which includes the Upbound API and web application,
+including the dashboard you see at `console.upbound.io`. By default, all
+connections are encrypted, but public. Optionally, you can use private network
+connectivity through [GCP Private Service
+Connect][gcp-private-service-connect].
+
+
+
+## Prerequisites
+
+- An organization created on Upbound
+
+
+
+- You should have a preexisting AWS organization to complete this guide.
+- You must create a new AWS sub-account. Read the [AWS documentation][aws-documentation] to learn how to create a new sub-account in an existing organization on AWS.
+
+After the sub-account information gets provided to Upbound, **don't change it
+any further.** Any changes made to the sub-account or the resources created by
+Upbound for the purposes of the Managed Space deployment void the SLA you have
+with Upbound. If you want to make configuration changes, contact your Upbound
+Solutions Architect.
+
+
+
+
+
+- You should have a preexisting GCP organization with an active Cloud Billing account to complete this guide.
+- You must create a new GCP project. Read the [GCP documentation][gcp-documentation] to learn how to create a new project in an existing organization on GCP.
+
+After the project information gets provided to Upbound, **don't change it any
+further.** Any changes made to the project or the resources created by Upbound
+for the purposes of the Managed Space deployment void the SLA you have with
+Upbound. If you want to make configuration changes, contact your Upbound
+Solutions Architect.
+
+
+
+
+
+## Set up cross-account management
+
+Upbound supports using AWS Key Management Service with cross-account IAM
+permissions. This enables the isolation of keys so the infrastructure operated
+by Upbound has limited access to symmetric keys.
+
+In the KMS key's account, apply the baseline key policy:
+
+```json
+{
+ "Sid": "Allow Upbound to use this key",
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": ["[Managed Space sub-account ID]"]
+ },
+ "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey"],
+ "Resource": "*"
+}
+```
+
+You need another key policy to let the sub-account create persistent resources
+with the KMS key:
+
+```json
+{
+ "Sid": "Allow attachment of persistent resources for an Upbound Managed Space",
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "[Managed Space sub-account ID]"
+ },
+ "Action": ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"],
+ "Resource": "*",
+ "Condition": {
+ "Bool": {
+ "kms:GrantIsForAWSResource": "true"
+ }
+ }
+}
+```
+
+### Configure PrivateLink
+
+By default, all connections to the Upbound Console are encrypted, but public.
+AWS PrivateLink is a feature that allows VPC peering whereby your traffic
+doesn't traverse the public internet. To have this configured, contact your
+Upbound Account Representative.
+
+
+
+
+
+## Enable APIs
+
+Enable the following APIs in the new project:
+
+- Kubernetes Engine API
+- Cloud Resource Manager API
+- Compute Engine API
+- Cloud DNS API
+
+:::tip
+Read how to enable APIs in a GCP project [here][here].
+:::
+
+## Create a service account
+
+Create a service account in the new project and name it `upbound-sa`. Give the
+service account the following roles:
+
+- Compute Admin
+- Project IAM Admin
+- Service Account Admin
+- DNS Administrator
+- Editor
+
+Select the service account you just created, then select **Keys**. Add a new key
+and choose JSON. The key downloads to your machine; save it for later.
+
+## Create a DNS Zone
+
+Create a DNS Zone and set the **Zone type** to `Public`.
+
+### Configure Private Service Connect
+
+By default, all connections to the Upbound Console are encrypted, but public.
+GCP Private Service Connect is a feature that allows VPC peering whereby your
+traffic doesn't traverse the public internet. To have this configured, contact
+your Upbound Account Representative.
+
+
+
+## Provide information to Upbound
+
+Once these policies get attached to the key, tell your Upbound Account
+Representative, providing them the following:
+
+
+
+- The full ARN of the KMS key.
+- The name of the organization that you created in Upbound. Use the up CLI command `up org list` to see this information.
+- Confirmation of which region in AWS you want the deployment to target.
+
+
+
+
+
+- The service account JSON key.
+- The NS records associated with the DNS zone created in the last step.
+- The name of the organization that you created in Upbound. Use the up CLI command `up org list` to see this information.
+- Confirmation of which region in GCP you want the deployment to target.
+
+
+
+Once Upbound has this information, the request gets processed in a business day.
+
+## Use your Managed Space
+
+Once the Managed Space gets deployed, you can see it in the Space selector when browsing your environment on [`console.upbound.io`][console-upbound-io].
+
+
+
+
+[contact]: https://www.upbound.io/contact-us
+[aws-privatelink]: #configure-privatelink
+[aws-documentation]: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html#orgs_manage_accounts_create-new
+[gcp-private-service-connect]: #configure-private-service-connect
+[gcp-documentation]: https://cloud.google.com/resource-manager/docs/creating-managing-organization
+[here]: https://cloud.google.com/apis/docs/getting-started#enabling_apis
+[console-upbound-io]: https://console.upbound.io/
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/oidc-configuration.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/oidc-configuration.md
new file mode 100644
index 000000000..33f775422
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/oidc-configuration.md
@@ -0,0 +1,284 @@
+---
+title: Configure OIDC
+sidebar_position: 20
+description: Configure OIDC in your Space
+---
+:::important
+This guide is only applicable for administrators who've deployed self-hosted Spaces. For general RBAC in Upbound, read [Upbound RBAC][upbound-rbac].
+:::
+
+Upbound uses the Kubernetes [Structured Authentication Configuration][structured-auth-config] to validate OIDC tokens sent to the API. You store this
+configuration as a `ConfigMap` and reference it from the Upbound router
+component during installation with Helm.
+
+This guide walks you through how to create and apply an authentication
+configuration to validate Upbound with an external identity provider. Each
+section focuses on a specific part of the configuration file.
+
+
+## Creating the `AuthenticationConfiguration` file
+
+First, create a file called `config.yaml` with an `AuthenticationConfiguration`
+kind. The `AuthenticationConfiguration` is the initial authentication structure
+necessary for Upbound to communicate with your chosen identity provider.
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1beta1
+kind: AuthenticationConfiguration
+jwt:
+- issuer:
+ url: oidc-issuer-url
+ audiences:
+ - oidc-client-id
+ claimMappings: # optional
+ username:
+ claim: oidc-username-claim
+ prefix: oidc-username-prefix
+ groups:
+ claim: oidc-groups-claim
+ prefix: oidc-groups-prefix
+```
+
+
+For detailed configuration options, including the CEL-based token validation,
+review the feature [documentation][structured-auth-config].
+
+
+The `AuthenticationConfiguration` allows you to configure multiple JWT
+authenticators as separate issuers.
+
+### Configure an issuer
+
+The `jwt` array requires an `issuer` specification and typically contains:
+
+- A `username` claim mapping
+- A `groups` claim mapping
+
+Optionally, the configuration may also include:
+
+- A set of claim validation rules
+- A set of user validation rules
+
+The `issuer` URL must be unique across all configured authenticators.
+
+```yaml
+issuer:
+ url: https://example.com
+ discoveryUrl: https://discovery.example.com/.well-known/openid-configuration
+ certificateAuthority: |-
+
+ audiences:
+ - client-id-a
+ - client-id-b
+ audienceMatchPolicy: MatchAny
+```
+
+By default, the authenticator assumes the OIDC Discovery URL is
+`{issuer.url}/.well-known/openid-configuration`. Most identity providers follow
+this structure, and you can omit the `discoveryUrl` field. To use a separate
+discovery service, specify the full path to the discovery endpoint in this
+field.
+
+If the CA for the Issuer isn't public, provide the PEM encoded CA for the Discovery URL.
+
+At least one of the `audiences` entries must match the `aud` claim in the JWT.
+For OIDC tokens, this is the Client ID of the application attempting to access
+the Upbound API. Having multiple values set allows the same configuration to
+apply to multiple client applications, for example the `kubectl` CLI and an
+Internal Developer Portal.
+
+If you specify multiple `audiences`, `audienceMatchPolicy` must equal `MatchAny`.
+
+### Configure `claimMappings`
+
+#### Username claim mapping
+
+By default, the authenticator uses the `sub` claim as the user name. To override this, either:
+
+- specify *both* `claim` and `prefix` (`prefix` may be explicitly set to the empty string), or
+- specify a CEL `expression` to calculate the user name.
+
+```yaml
+claimMappings:
+ username:
+ claim: "sub"
+ prefix: "keycloak"
+    # or, instead, use a CEL expression:
+ expression: 'claims.username + ":external-user"'
+```
+
+
+#### Groups claim mapping
+
+By default, this configuration doesn't map groups, unless you either:
+
+- specify both `claim` and `prefix` (`prefix` may be explicitly set to the empty string), or
+- specify a CEL `expression` that returns a string or list of strings.
+
+
+```yaml
+claimMappings:
+ groups:
+ claim: "groups"
+ prefix: ""
+    # or, instead, use a CEL expression:
+ expression: 'claims.roles.split(",")'
+```
+
+
+### Validation rules
+
+
+Validation rules are outside the scope of this document. Review the
+[documentation][structured-auth-config] for more information. Examples include
+using CEL expressions to validate authentication such as:
+
+
+- Validating that a token claim has a specific value
+- Validating that a token has a limited lifetime
+- Ensuring usernames and groups don't contain reserved prefixes
+
+## Required claims
+
+To interact with Space and ControlPlane APIs, users must have the `upbound.io/aud` claim set to one of the following:
+
+| Upbound.io Audience | Notes |
+| -------------------------------------------------------- | -------------------------------------------------------------------- |
+| `[]` | No Access to Space-level or ControlPlane APIs |
+| `['upbound:spaces:api']` | This Identity is only for Space-level APIs |
+| `['upbound:spaces:controlplanes']` | This Identity is only for ControlPlane APIs |
+| `['upbound:spaces:api', 'upbound:spaces:controlplanes']` | This Identity is for both Space-level and ControlPlane APIs |
+
+
+You can set this claim in two ways:
+
+- In the identity provider, mapped into the ID token.
+- Injected in the authenticator with the `jwt.claimMappings.extra` array.
+
+For example:
+```yaml
+apiVersion: apiserver.config.k8s.io/v1beta1
+kind: AuthenticationConfiguration
+jwt:
+- issuer:
+ url: https://keycloak:8443/realms/master
+ certificateAuthority: |-
+
+ audiences:
+ - master-realm
+ audienceMatchPolicy: MatchAny
+ claimMappings:
+ username:
+ claim: "preferred_username"
+ prefix: "keycloak:"
+ groups:
+ claim: "groups"
+ prefix: ""
+ extra:
+ - key: 'upbound.io/aud'
+ valueExpression: "['upbound:spaces:controlplanes', 'upbound:spaces:api']"
+```
+
+## Install the `AuthenticationConfiguration`
+
+Once you create an `AuthenticationConfiguration` file, specify this file as a
+`ConfigMap` in the host cluster for the Upbound Space.
+
+```sh
+kubectl create configmap <configmap-name> -n upbound-system --from-file=config.yaml=./path/to/config.yaml
+```
+
+
+To enable OIDC authentication and disable Upbound IAM when installing the Space,
+reference the configuration and pass an empty value to the Upbound IAM issuer
+parameter:
+
+
+```sh
+up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
+ ...
+  --set "authentication.structuredConfig=<configmap-name>" \
+ --set "router.controlPlane.extraArgs[0]=--upbound-iam-issuer-url="
+```
+
+## Configure RBAC
+
+
+In this scenario, the external identity provider handles authentication, but
+permissions for Spaces and ControlPlane APIs use standard RBAC objects.
+
+### Spaces APIs
+
+The Spaces APIs include:
+```yaml
+- apiGroups:
+ - spaces.upbound.io
+ resources:
+ - controlplanes
+ - sharedexternalsecrets
+ - sharedsecretstores
+ - backups
+ - backupschedules
+ - sharedbackups
+ - sharedbackupconfigs
+ - sharedbackupschedules
+- apiGroups:
+ - observability.spaces.upbound.io
+ resources:
+ - sharedtelemetryconfigs
+```
+
+### ControlPlane APIs
+
+
+
+Crossplane specifies three [roles][crossplane-managed-clusterroles] for a
+ControlPlane: admin, editor, and viewer. These map to the verbs `admin`, `edit`,
+and `view` on the `controlplanes/k8s` resource in the `spaces.upbound.io` API
+group.
+
+
+### Control access
+
+The `groups` claim in the `AuthenticationConfiguration` allows you to control
+resource access when you create a `ClusterRoleBinding`. A `ClusterRole` defines
+the permissions, and a `ClusterRoleBinding` grants them to a subject.
+
+The example below allows `admin` permissions for all ControlPlanes to members of
+the `ctp-admins` group:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: allow-ctp-admin
+rules:
+- apiGroups:
+ - spaces.upbound.io
+ resources:
+ - controlplanes/k8s
+ verbs:
+ - admin
+```
+
+The corresponding `ClusterRoleBinding` grants the role to the `ctp-admins` group:
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: allow-ctp-admin
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: allow-ctp-admin
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: ctp-admins
+```
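+
+As a sanity check, you can impersonate a hypothetical member of the group (the user name `jane` below is illustrative) and ask the API server whether the verb is allowed:
+
+```shell
+kubectl auth can-i admin controlplanes.spaces.upbound.io --subresource=k8s --as=jane --as-group=ctp-admins
+```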
+
+[structured-auth-config]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-authentication-configuration
+[crossplane-managed-clusterroles]: https://github.com/crossplane/crossplane/blob/master/design/design-doc-rbac-manager.md#managed-rbac-clusterroles
+[upbound-rbac]: /manuals/platform/concepts/authorization/upbound-rbac
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/proxies-config.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/proxies-config.md
new file mode 100644
index 000000000..422e47088
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/proxies-config.md
@@ -0,0 +1,26 @@
+---
+title: Proxied configuration
+sidebar_position: 20
+description: Configure Upbound within a proxied environment
+---
+
+
+
+
+When you install Upbound with Helm in a proxied environment, update the registry values in the example below to point to your internal registry.
+
+
+
+```bash
+helm -n upbound-system upgrade --install spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "${SPACES_VERSION}" \
+ --set "ingress.host=${SPACES_ROUTER_HOST}" \
+ --set "account=${UPBOUND_ACCOUNT}" \
+ --set "authentication.hubIdentities=true" \
+ --set "authorization.hubRBAC=true" \
+ --set "registry=registry.company.corp/spaces" \
+ --set "controlPlanes.uxp.registryOverride=registry.company.corp/xpkg.upbound.io" \
+ --set "controlPlanes.uxp.repository=registry.company.corp/spaces" \
+ --wait
+```
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/query-api.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/query-api.md
new file mode 100644
index 000000000..3a01165dc
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/query-api.md
@@ -0,0 +1,388 @@
+---
+title: Deploy Query API infrastructure
+weight: 130
+description: Query API
+aliases:
+ - /all-spaces/self-hosted-spaces/query-api
+ - /self-hosted-spaces/query-api
+ - all-spaces/self-hosted-spaces/query-api
+---
+
+
+
+
+
+:::important
+
+This feature is in preview. The Query API is available in the Cloud Space offering since `v1.6` and is enabled by default there.
+
+In self-hosted Spaces, the Query API is required to connect a Space since `v1.8.0`, but it's off by default; see below to enable it.
+
+:::
+
+Upbound's Query API allows users to inspect objects and resources within their control planes. The read-only `up alpha query` and `up alpha get` CLI commands let you gather information about your control planes quickly and efficiently. These commands follow the [`kubectl` conventions][kubectl-conventions] for filtering, sorting, and retrieving information from your Space.
+
+Query API requires a PostgreSQL database to store the data. You can use the default PostgreSQL instance provided by Upbound or bring your own PostgreSQL instance.
+
+## Managed setup
+
+:::tip
+If you don't have specific requirements for your setup, Upbound recommends following this approach.
+:::
+
+To enable this feature, set `features.alpha.apollo.enabled=true` and `apollo.apollo.storage.postgres.create=true` when installing Spaces.
+
+You also need to install CloudNativePG (`CNPG`) to provide the PostgreSQL instance. You can let the `up` CLI do this for you, or install it manually.
+
+For more customization, see the [Helm chart reference][helm-chart-reference]. You can modify the number
+of PostgreSQL instances, pooling instances, storage size, and more.
+
+If you have specific requirements not addressed in the Helm chart, see below for more information on how to bring your own [PostgreSQL setup][postgresql-setup].
+
+### Using the up CLI
+
+Before you begin, make sure you have the most recent version of the [`up` CLI installed][up-cli-installed].
+
+To enable this feature, set `features.alpha.apollo.enabled=true` and `apollo.apollo.storage.postgres.create=true` when installing Spaces:
+
+```bash
+up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
+ ...
+ --set "features.alpha.apollo.enabled=true" \
+ --set "apollo.apollo.storage.postgres.create=true"
+```
+
+`up space init` and `up space upgrade` install CloudNativePG automatically, if needed.
+
+### Helm chart
+
+If you are installing the Helm chart in some other way, you can manually install CloudNativePG in one of the [supported ways][supported-ways], for example:
+
+```shell
+kubectl apply --server-side -f \
+ https://github.com/cloudnative-pg/cloudnative-pg/releases/download/v1.24.1/cnpg-1.24.1.yaml
+kubectl rollout status -n cnpg-system deployment cnpg-controller-manager -w --timeout 120s
+```
+
+Next, install the Spaces Helm chart with the necessary values, for example:
+
+```shell
+helm -n upbound-system upgrade --install spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "${SPACES_VERSION}" \
+ ...
+ --set "features.alpha.apollo.enabled=true" \
+ --set "apollo.apollo.storage.postgres.create=true" \
+ --wait
+```
+
+## Self-hosted PostgreSQL configuration
+
+
+If your workflow requires more customization, you can provide your own
+PostgreSQL instance and configure credentials manually.
+
+Using your own PostgreSQL instance requires careful architecture consideration.
+Review the architecture and requirements guidelines.
+
+### Architecture
+
+The Query API architecture uses the following components in addition to a PostgreSQL database:
+* **Apollo Syncers**: watch `etcd` for changes and sync them to PostgreSQL. One or more per control plane.
+* **Apollo Server**: serves the Query API from the data in PostgreSQL. One or more per Space.
+
+The default setup also uses the `PgBouncer` connection pooler to manage connections from the syncers.
+
+```mermaid
+graph LR
+ User[User]
+
+ subgraph Cluster["Cluster (Spaces)"]
+ direction TB
+ Apollo[apollo]
+
+ subgraph ControlPlanes["Control Planes"]
+ APIServer[API Server]
+ Syncer[apollo-syncer]
+ end
+ end
+
+ PostgreSQL[(PostgreSQL)]
+
+ User -->|requests| Apollo
+
+ Apollo -->|connects| PostgreSQL
+ Apollo -->|creates schemas & users| PostgreSQL
+
+ Syncer -->|watches| APIServer
+ Syncer -->|writes| PostgreSQL
+
+ PostgreSQL -->|data| Apollo
+
+ style PostgreSQL fill:#e1f5ff,stroke:#333,stroke-width:2px,color:#000
+ style Apollo fill:#ffe1e1,stroke:#333,stroke-width:2px,color:#000
+ style Cluster fill:#f0f0f0,stroke:#333,stroke-width:2px,color:#000
+ style ControlPlanes fill:#fff,stroke:#666,stroke-width:1px,stroke-dasharray: 5 5,color:#000
+```
+
+
+Each component needs to connect to the PostgreSQL database.
+
+In the event of database issues, you can provide a new database and the syncers
+automatically repopulate the data.
+
+### Requirements
+
+* A PostgreSQL 16 instance or cluster.
+* A database, for example named `upbound`.
+* **Optional**: A dedicated user for the Apollo Syncers, for example named `syncer`. If you don't provide one, the Spaces Controller generates a dedicated set of credentials with the necessary permissions for each syncer.
+* A dedicated **superuser or admin account** for the Apollo Server.
+* **Optional**: A connection pooler, like PgBouncer, to manage connections from the Apollo Syncers. If you don't provide the optional users, you might have to configure the pooler so users can connect with the same credentials as PostgreSQL.
+* **Optional**: A read replica for the Apollo Syncers to connect to, to reduce load on the primary database. This might introduce a slight delay before data is available through the Query API.
+
+Below you can find examples of setups to get you started. You can mix and match the examples to suit your needs.
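+
+As a starting point, a minimal sketch of bootstrapping the database and users described above with `psql` (the host, role names, and passwords are examples; adjust them to your environment):
+
+```shell
+psql -h your-postgres-host -U postgres <<'SQL'
+-- Admin account for the Apollo Server; it must be able to create schemas and users
+CREATE ROLE apollo WITH LOGIN PASSWORD 'supersecret' SUPERUSER;
+-- Optional dedicated user for the Apollo Syncers
+CREATE ROLE syncer WITH LOGIN PASSWORD 'supersecret';
+-- Database used by the Query API
+CREATE DATABASE upbound OWNER apollo;
+SQL
+```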
+
+### In-cluster setup
+
+:::tip
+
+If you don't have strong opinions on your setup but still want full control over
+the resources created, for example for customizations the managed setup doesn't
+support, Upbound recommends the in-cluster setup.
+
+:::
+
+For more customization than the managed setup, you can use CloudNativePG for
+PostgreSQL in the same cluster.
+
+For in-cluster setup, manually deploy the operator in one of the [supported ways][supported-ways-1], for example:
+
+```shell
+kubectl apply --server-side -f \
+ https://github.com/cloudnative-pg/cloudnative-pg/releases/download/v1.24.1/cnpg-1.24.1.yaml
+kubectl rollout status -n cnpg-system deployment cnpg-controller-manager -w --timeout 120s
+```
+
+Then create a `Cluster` and `Pooler` in the `upbound-system` namespace, for example:
+
+```shell
+kubectl create ns upbound-system
+
+kubectl apply -f - <
+```
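+
+The manifests piped to `kubectl` above are elided in this excerpt. A minimal sketch of a CloudNativePG `Cluster` and `Pooler`, assuming the `postgresql.cnpg.io/v1` API; the names, instance counts, and storage size are illustrative:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+  name: apollo-pg
+  namespace: upbound-system
+spec:
+  instances: 3
+  storage:
+    size: 20Gi
+---
+apiVersion: postgresql.cnpg.io/v1
+kind: Pooler
+metadata:
+  name: apollo-pg-pooler
+  namespace: upbound-system
+spec:
+  cluster:
+    name: apollo-pg
+  instances: 2
+  type: rw
+  pgbouncer:
+    poolMode: transaction
+```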
+
+### External setup
+
+
+:::tip
+
+If you want to run your PostgreSQL instance outside the cluster, but are fine with credentials being managed by the `apollo` user, this is the suggested way to proceed.
+
+:::
+
+When using this setup, you must manually create the required Secrets in the
+`upbound-system` namespace. The `apollo` user must have permissions to create
+schemas and users.
+
+```shell
+
+kubectl create ns upbound-system
+
+# A Secret containing the necessary credentials to connect to the PostgreSQL instance
+kubectl create secret generic spaces-apollo-pg-app -n upbound-system \
+ --from-literal=password=supersecret
+
+# A Secret containing the necessary CA certificate to verify the connection to the PostgreSQL instance
+kubectl create secret generic spaces-apollo-pg-ca -n upbound-system \
+ --from-file=ca.crt=/path/to/ca.crt
+```
+
+Next, install Spaces with the necessary settings:
+
+```shell
+export PG_URL=your-postgres-host:5432
+export PG_POOLED_URL=your-pgbouncer-host:5432 # this could be the same as above
+
+helm upgrade --install ... \
+ --set "features.alpha.apollo.enabled=true" \
+ --set "apollo.apollo.storage.postgres.create=false" \
+ --set "apollo.apollo.storage.postgres.connection.url=$PG_URL" \
+ --set "apollo.apollo.storage.postgres.connection.credentials.secret.name=spaces-apollo-pg-app" \
+ --set "apollo.apollo.storage.postgres.connection.credentials.format=basicauth" \
+ --set "apollo.apollo.storage.postgres.connection.ca.name=spaces-apollo-pg-ca" \
+ --set "apollo.apollo.storage.postgres.connection.syncer.url=$PG_POOLED_URL"
+```
+
+### External setup with all custom credentials
+
+For custom credentials with Apollo Syncers or Server, create a new secret in the
+`upbound-system` namespace:
+
+```shell
+export APOLLO_SYNCER_USER=syncer
+export APOLLO_SERVER_USER=apollo
+
+kubectl create ns upbound-system
+
+# A Secret containing the necessary credentials to connect to the PostgreSQL instance
+kubectl create secret generic spaces-apollo-pg-app -n upbound-system \
+ --from-literal=password=supersecret
+
+# A Secret containing the necessary CA certificate to verify the connection to the PostgreSQL instance
+kubectl create secret generic spaces-apollo-pg-ca -n upbound-system \
+ --from-file=ca.crt=/path/to/ca.crt
+
+# A Secret containing the necessary credentials for the Apollo Syncers to connect to the PostgreSQL instance.
+# These will be used by all Syncers in the Space.
+kubectl create secret generic spaces-apollo-pg-syncer -n upbound-system \
+ --from-literal=username=$APOLLO_SYNCER_USER \
+ --from-literal=password=supersecret
+
+# A Secret containing the necessary credentials for the Apollo Server to connect to the PostgreSQL instance.
+kubectl create secret generic spaces-apollo-pg-apollo -n upbound-system \
+ --from-literal=username=$APOLLO_SERVER_USER \
+ --from-literal=password=supersecret
+```
+
+Next, install Spaces with the necessary settings:
+
+```shell
+export PG_URL=your-postgres-host:5432
+export PG_POOLED_URL=your-pgbouncer-host:5432 # this could be the same as above
+
+helm ... \
+ --set "features.alpha.apollo.enabled=true" \
+ --set "apollo.apollo.storage.postgres.create=false" \
+ --set "apollo.apollo.storage.postgres.connection.url=$PG_URL" \
+ --set "apollo.apollo.storage.postgres.connection.credentials.secret.name=spaces-apollo-pg-app" \
+ --set "apollo.apollo.storage.postgres.connection.credentials.format=basicauth" \
+ --set "apollo.apollo.storage.postgres.connection.ca.name=spaces-apollo-pg-ca" \
+ --set "apollo.apollo.storage.postgres.connection.syncer.url=$PG_POOLED_URL" \
+
+ #. the syncers
+ --set "apollo.apollo.storage.postgres.connection.syncer.credentials.format=basicauth" \
+ --set "apollo.apollo.storage.postgres.connection.syncer.credentials.user=$APOLLO_SYNCER_USER" \
+ --set "apollo.apollo.storage.postgres.connection.syncer.credentials.secret.name=spaces-apollo-pg-syncer" \
+
+ #. the server
+ --set "apollo.apollo.storage.postgres.connection.apollo.credentials.format=basicauth" \
+ --set "apollo.apollo.storage.postgres.connection.apollo.credentials.user=$APOLLO_SERVER_USER" \
+ --set "apollo.apollo.storage.postgres.connection.apollo.credentials.secret.name=spaces-apollo-pg-apollo" \
+ --set "apollo.apollo.storage.postgres.connection.apollo.url=$PG_POOLED_URL"
+```
+
+
+## Using the Query API
+
+
+See the [Query API documentation][query-api-documentation] for more information on how to use the Query API.
+
+
+
+
+[postgresql-setup]: #self-hosted-postgresql-configuration
+[up-cli-installed]: /manuals/cli/overview
+[query-api-documentation]: /spaces/howtos/query-api
+
+[helm-chart-reference]: /reference/helm-reference
+[kubectl-conventions]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/
+[supported-ways]: https://cloudnative-pg.io/documentation/current/installation_upgrade/
+[supported-ways-1]: https://cloudnative-pg.io/documentation/current/installation_upgrade/
+[cloudnativepg-documentation]: https://cloudnative-pg.io/documentation/1.24/storage/#configuration-via-a-pvc-template
+[postgresql-cluster]: https://cloudnative-pg.io/documentation/1.24/resource_management/
+[pooler]: https://cloudnative-pg.io/documentation/1.24/connection_pooling/#pod-templates
+[postgresql-cluster-2]: https://cloudnative-pg.io/documentation/1.24/replication/
+[pooler-3]: https://cloudnative-pg.io/documentation/1.24/connection_pooling/#high-availability-ha
+[postgresql-cluster-4]: https://cloudnative-pg.io/documentation/1.24/operator_capability_levels/#override-of-operand-images-through-the-crd
+[pooler-5]: https://cloudnative-pg.io/documentation/1.24/connection_pooling/#pod-templates
+[cloudnativepg-documentation-6]: https://cloudnative-pg.io/documentation/1.24/postgresql_conf/
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/scaling-resources.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/scaling-resources.md
new file mode 100644
index 000000000..0b3a21257
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/scaling-resources.md
@@ -0,0 +1,179 @@
+---
+title: Scaling vCluster and etcd Resources
+weight: 950
+description: A guide for scaling vCluster and etcd resources in self-hosted Spaces
+aliases:
+ - /all-spaces/self-hosted-spaces/scaling-resources
+ - /spaces/scaling-resources
+---
+
+In large workloads or control plane migrations, you may encounter resource
+constraints that impact performance. This guide explains how to scale vCluster and `etcd`
+resources for optimal performance in your self-hosted Space.
+
+
+## Signs of resource constraints
+
+You may need to scale your vCluster or `etcd` resources if you observe:
+
+- API server timeout errors such as `http: Handler timeout`
+- Error messages about `too many requests` and requests to `try again later`
+- Operations like provider installation failing with errors like `cannot apply provider package secret`
+- vCluster pods experiencing continuous restarts
+- API performance degrades with high resource volume
+
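+To spot these symptoms, a quick sketch of commands to run against the cluster hosting the Space (the `kubectl top` command requires the Kubernetes metrics server):
+
+```shell
+# Look for restarts or non-Running statuses on vCluster and etcd pods
+kubectl get pods -A | grep -E 'vcluster|etcd'
+
+# Check current CPU and memory usage per container
+kubectl top pods -A --containers | grep -E 'vcluster|etcd'
+```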
+
+## Scaling vCluster resources
+
+
+The vCluster component handles Kubernetes API requests for your control planes.
+Deployments with multiple control planes or providers may exceed default resource allocations.
+
+```yaml
+# Default settings
+controlPlanes.vcluster.resources.limits.cpu: "3000m"
+controlPlanes.vcluster.resources.limits.memory: "3960Mi"
+controlPlanes.vcluster.resources.requests.cpu: "170m"
+controlPlanes.vcluster.resources.requests.memory: "1320Mi"
+```
+
+For larger workloads, like migrating from an existing control plane with several
+providers, increase these resource limits in your Spaces `values.yaml` file.
+
+```yaml
+controlPlanes:
+ vcluster:
+ resources:
+ limits:
+ cpu: "4000m" # Increase to 4 cores
+ memory: "6Gi" # Increase to 6GB memory
+ requests:
+ cpu: "500m" # Increase baseline CPU request
+ memory: "2Gi" # Increase baseline memory request
+```
+
+## Scaling `etcd` storage
+
+Kubernetes relies on `etcd` performance, which can lead to IOPS (input/output
+operations per second) bottlenecks. Upbound allocates `50Gi` volumes for `etcd`
+in cloud environments to ensure adequate IOPS performance.
+
+```yaml
+# Default setting
+controlPlanes.etcd.persistence.size: "5Gi"
+```
+
+For production environments or when migrating large control planes, increase
+`etcd` volume size and specify an appropriate storage class:
+
+```yaml
+controlPlanes:
+ etcd:
+ persistence:
+ size: "50Gi" # Recommended for production
+ storageClassName: "fast-ssd" # Use a high-performance storage class
+```
+
+### Storage class considerations
+
+For AWS:
+- Use GP3 volumes with adequate IOPS
+- For AWS GP3 volumes, IOPS scale with volume size (3000 IOPS baseline)
+- For optimal performance, provision at least 32Gi to support up to 16,000 IOPS
+
+For GCP and Azure:
+- Use SSD-based persistent disk types for optimal performance
+- Consider premium storage options for high-throughput workloads
+
+## Scaling Crossplane resources
+
+Crossplane manages provider resources in your control planes. You may need to increase provider resources for larger deployments:
+
+```yaml
+# Default settings
+controlPlanes.uxp.resourcesCrossplane.requests.cpu: "370m"
+controlPlanes.uxp.resourcesCrossplane.requests.memory: "400Mi"
+```
+
+
+For environments with many providers or managed resources:
+
+
+```yaml
+controlPlanes:
+ uxp:
+ resourcesCrossplane:
+ limits:
+ cpu: "1000m" # Add CPU limit
+ memory: "1Gi" # Add memory limit
+ requests:
+ cpu: "500m" # Increase CPU request
+ memory: "512Mi" # Increase memory request
+```
+
+## High availability configuration
+
+For production environments, enable High Availability mode to ensure resilience:
+
+```yaml
+controlPlanes:
+ ha:
+ enabled: true
+```
+
+## Best practices for migration scenarios
+
+When migrating from existing control planes into a self-hosted Space:
+
+1. **Pre-scale resources**: Scale up resources before performing the migration
+2. **Monitor resource usage**: Watch resource consumption during and after migration with `kubectl top pods`
+3. **Scale incrementally**: If issues persist, increase resources incrementally until performance stabilizes
+4. **Consider storage performance**: `etcd` is sensitive to storage I/O performance
+
+## Helm values configuration
+
+Apply these settings through your Spaces Helm values file:
+
+```yaml
+controlPlanes:
+ vcluster:
+ resources:
+ limits:
+ cpu: "4000m"
+ memory: "6Gi"
+ requests:
+ cpu: "500m"
+ memory: "2Gi"
+ etcd:
+ persistence:
+ size: "50Gi"
+ storageClassName: "gp3" # Use your cloud provider's fast storage class
+ uxp:
+ resourcesCrossplane:
+ limits:
+ cpu: "1000m"
+ memory: "1Gi"
+ requests:
+ cpu: "500m"
+ memory: "512Mi"
+ ha:
+ enabled: true # For production environments
+```
+
+Apply the configuration using Helm:
+
+```bash
+helm upgrade --install spaces oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ -f values.yaml \
+ -n upbound-system
+```
+
+## Considerations
+
+- **Provider count**: Each provider adds resource overhead - consider using provider families to optimize resource usage
+- **Managed resources**: The number of managed resources impacts CPU usage more than memory
+- **Vertical pod autoscaling**: Consider using vertical pod autoscaling in Kubernetes to automatically adjust resources based on usage
+- **Storage performance**: Storage performance is as important as capacity for etcd
+- **Network latency**: Low-latency connections between components improve performance
+
+
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/self-hosted-spaces-deployment.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/self-hosted-spaces-deployment.md
new file mode 100644
index 000000000..e549e3939
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/self-hosted-spaces-deployment.md
@@ -0,0 +1,461 @@
+---
+title: Deployment Workflow
+sidebar_position: 3
+description: A quickstart guide for Upbound Spaces
+tier: "business"
+---
+import GlobalLanguageSelector, { CodeBlock } from '@site/src/components/GlobalLanguageSelector';
+
+
+
+
+
+This guide deploys a self-hosted Upbound cluster in AWS.
+
+
+
+
+
+This guide deploys a self-hosted Upbound cluster in Azure.
+
+
+
+
+
+This guide deploys a self-hosted Upbound cluster in GCP.
+
+
+
+Disconnected Spaces allows you to host control planes in your preferred environment.
+
+## Prerequisites
+
+To get started deploying your own Disconnected Space, you need:
+
+- An Upbound organization account string, provided by your Upbound account representative
+- A `token.json` license, provided by your Upbound account representative
+
+
+
+- An AWS account and the AWS CLI
+
+
+
+
+
+- An Azure account and the Azure CLI
+
+
+
+
+
+- A GCP account and the GCP CLI
+
+
+
+:::important
+Disconnected Spaces are a business-critical feature of Upbound and require a license token to complete the installation. [Contact Upbound][contact-upbound] if you want to try out Upbound with Disconnected Spaces.
+:::
+
+## Provision the hosting environment
+
+### Create a cluster
+
+
+
+Configure the name and target region you want the EKS cluster deployed to.
+
+```ini
+export SPACES_CLUSTER_NAME=upbound-space-quickstart
+export SPACES_REGION=us-east-1
+```
+
+Provision a 3-node cluster using eksctl.
+
+```bash
+cat <
+```
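+
+The cluster manifest piped to `eksctl` is elided in this excerpt. A minimal sketch, assuming eksctl's `ClusterConfig` schema and the environment variables set above (the node group name and instance type are illustrative):
+
+```bash
+cat <<EOF | eksctl create cluster -f -
+apiVersion: eksctl.io/v1alpha5
+kind: ClusterConfig
+metadata:
+  name: ${SPACES_CLUSTER_NAME}
+  region: ${SPACES_REGION}
+managedNodeGroups:
+  - name: spaces-nodes
+    instanceType: m5.xlarge
+    desiredCapacity: 3
+EOF
+```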
+
+
+
+Configure the name and target region you want the AKS cluster deployed to.
+
+```ini
+export SPACES_RESOURCE_GROUP_NAME=upbound-space-quickstart
+export SPACES_CLUSTER_NAME=upbound-space-quickstart
+export SPACES_LOCATION=westus
+```
+
+Provision a new Azure resource group.
+
+```bash
+az group create --name ${SPACES_RESOURCE_GROUP_NAME} --location ${SPACES_LOCATION}
+```
+
+Provision a 3-node cluster.
+
+```bash
+az aks create -g ${SPACES_RESOURCE_GROUP_NAME} -n ${SPACES_CLUSTER_NAME} \
+ --enable-managed-identity \
+ --node-count 3 \
+ --node-vm-size Standard_D4s_v4 \
+ --enable-addons monitoring \
+ --enable-msi-auth-for-monitoring \
+ --generate-ssh-keys \
+ --network-plugin kubenet \
+ --network-policy calico
+```
+
+Get the kubeconfig of your AKS cluster.
+
+```bash
+az aks get-credentials --resource-group ${SPACES_RESOURCE_GROUP_NAME} --name ${SPACES_CLUSTER_NAME}
+```
+
+
+
+
+
+Configure the name and target region you want the GKE cluster deployed to.
+
+```ini
+export SPACES_PROJECT_NAME=upbound-spaces-project
+export SPACES_CLUSTER_NAME=upbound-spaces-quickstart
+export SPACES_LOCATION=us-west1-a
+```
+
+Create a new project and set it as the current project.
+
+```bash
+gcloud projects create ${SPACES_PROJECT_NAME}
+gcloud config set project ${SPACES_PROJECT_NAME}
+```
+
+Provision a 3-node cluster.
+
+```bash
+gcloud container clusters create ${SPACES_CLUSTER_NAME} \
+ --enable-network-policy \
+ --num-nodes=3 \
+ --zone=${SPACES_LOCATION} \
+ --machine-type=e2-standard-4
+```
+
+Get the kubeconfig of your GKE cluster.
+
+```bash
+gcloud container clusters get-credentials ${SPACES_CLUSTER_NAME} --zone=${SPACES_LOCATION}
+```
+
+
+
+## Configure the pre-install
+
+### Set your Upbound organization account details
+
+Set your Upbound organization account string as an environment variable for use in future steps
+
+```ini
+export UPBOUND_ACCOUNT=
+```
+
+### Set up pre-install configurations
+
+Export the path of the license token JSON file provided by your Upbound account representative.
+
+```ini {copy-lines="2"}
+# Change the path to where you saved the token.
+export SPACES_TOKEN_PATH="/path/to/token.json"
+```
+
+Set the version of Spaces software you want to install.
+
+```ini
+export SPACES_VERSION=
+```
+
+Set the router host and cluster type. The `SPACES_ROUTER_HOST` is the domain name that's used to access the control plane instances. It's used by the ingress controller to route requests.
+
+```ini
+export SPACES_ROUTER_HOST="proxy.upbound-127.0.0.1.nip.io"
+```
+
+:::important
+Make sure to replace the placeholder text in `SPACES_ROUTER_HOST` and provide a real domain that you own.
+:::
+
+
+## Install the Spaces software
+
+
+### Install cert-manager
+
+Install cert-manager.
+
+```bash
+kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
+kubectl wait deployment -n cert-manager cert-manager-webhook --for condition=Available=True --timeout=360s
+```
+
+
+
+### Install ALB Load Balancer
+
+```bash
+helm install aws-load-balancer-controller aws-load-balancer-controller --namespace kube-system \
+ --repo https://aws.github.io/eks-charts \
+ --set clusterName=${SPACES_CLUSTER_NAME} \
+ --set serviceAccount.create=false \
+ --set serviceAccount.name=aws-load-balancer-controller \
+ --wait
+```
+
+
+
+### Install ingress-nginx
+
+Starting with Spaces v1.10.0, you need to configure the ingress-nginx
+controller to allow SSL-passthrough mode. You can do so by passing the
+`--enable-ssl-passthrough=true` command-line option to the controller.
+The following Helm install command enables this with the `controller.extraArgs`
+parameter:
+
+
+
+```bash
+helm upgrade --install ingress-nginx ingress-nginx \
+ --create-namespace --namespace ingress-nginx \
+ --repo https://kubernetes.github.io/ingress-nginx \
+ --version 4.12.1 \
+ --set 'controller.service.type=LoadBalancer' \
+ --set 'controller.extraArgs.enable-ssl-passthrough=true' \
+ --set 'controller.service.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-type=external' \
+ --set 'controller.service.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-scheme=internet-facing' \
+ --set 'controller.service.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-nlb-target-type=ip' \
+ --set 'controller.service.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-healthcheck-protocol=http' \
+ --set 'controller.service.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-healthcheck-path=/healthz' \
+ --set 'controller.service.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-healthcheck-port=10254' \
+ --wait
+```
+
+
+
+
+
+```bash
+helm upgrade --install ingress-nginx ingress-nginx \
+ --create-namespace --namespace ingress-nginx \
+ --repo https://kubernetes.github.io/ingress-nginx \
+ --version 4.12.1 \
+ --set 'controller.service.type=LoadBalancer' \
+ --set 'controller.extraArgs.enable-ssl-passthrough=true' \
+ --set 'controller.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path=/healthz' \
+ --wait
+```
+
+
+
+
+
+```bash
+helm upgrade --install ingress-nginx ingress-nginx \
+ --create-namespace --namespace ingress-nginx \
+ --repo https://kubernetes.github.io/ingress-nginx \
+ --version 4.12.1 \
+ --set 'controller.service.type=LoadBalancer' \
+ --set 'controller.extraArgs.enable-ssl-passthrough=true' \
+ --wait
+```
+
+
+
+### Install Upbound Spaces software
+
+Create an image pull secret so that the cluster can pull Upbound Spaces images.
+
+```bash
+kubectl create ns upbound-system
+kubectl -n upbound-system create secret docker-registry upbound-pull-secret \
+ --docker-server=https://xpkg.upbound.io \
+ --docker-username="$(jq -r .accessId $SPACES_TOKEN_PATH)" \
+ --docker-password="$(jq -r .token $SPACES_TOKEN_PATH)"
+```
+
+Log in with Helm to be able to pull chart images for the installation commands.
+
+```bash
+jq -r .token $SPACES_TOKEN_PATH | helm registry login xpkg.upbound.io -u $(jq -r .accessId $SPACES_TOKEN_PATH) --password-stdin
+```
+
+Install the Spaces software.
+
+```bash
+helm -n upbound-system upgrade --install spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "${SPACES_VERSION}" \
+ --set "ingress.host=${SPACES_ROUTER_HOST}" \
+ --set "account=${UPBOUND_ACCOUNT}" \
+ --set "authentication.hubIdentities=true" \
+ --set "authorization.hubRBAC=true" \
+ --wait
+```
+
+### Create a DNS record
+
+:::important
+If you chose to create a public ingress, you also need to create a DNS record for the load balancer of the public facing ingress. Do this before you create your first control plane.
+:::
+
+Create a DNS record for the load balancer of the public facing ingress. To get the address for the Ingress, run the following:
+
+
+
+```bash
+kubectl get ingress \
+ -n upbound-system mxe-router-ingress \
+ -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
+```
+
+
+
+
+
+```bash
+kubectl get ingress \
+ -n upbound-system mxe-router-ingress \
+ -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
+```
+
+
+
+
+
+```bash
+kubectl get ingress \
+ -n upbound-system mxe-router-ingress \
+ -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
+```
+
+
+
+If the preceding command doesn't return a load balancer address then your provider may not have allocated it yet. Once it's available, add a DNS record for the `ROUTER_HOST` to point to the given load balancer address. If it's an IPv4 address, add an A record. If it's a domain name, add a CNAME record.
+
+## Configure the up CLI
+
+With your kubeconfig pointed at the Kubernetes cluster where you installed
+Upbound Spaces, create a new profile in the `up` CLI. This profile interacts
+with your Space:
+
+```bash
+up profile create --use ${SPACES_CLUSTER_NAME} --type=disconnected --organization ${UPBOUND_ACCOUNT}
+```
+
+Optionally, log in to your Upbound account using the new profile so you can use the Upbound Marketplace with this profile as well:
+
+```bash
+up login
+```
+
+
+## Connect to your Space
+
+
+Use `up ctx` to create a kubeconfig context pointed at your new Space:
+
+```bash
+up ctx disconnected/$(kubectl config current-context)
+```
+
+## Create your first control plane
+
+You can now create a control plane with the `up` CLI:
+
+```bash
+up ctp create ctp1
+```
+
+You can also create a control plane with kubectl:
+
+```yaml
+cat <
+```
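+
+The manifest piped to `kubectl` is elided in this excerpt. A minimal sketch, assuming the `spaces.upbound.io/v1beta1` `ControlPlane` API shown elsewhere in this documentation:
+
+```yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+  name: ctp1
+  namespace: default
+```
+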
+```yaml
+observability:
+ spacesCollector:
+ env:
+ - name: API_KEY
+ valueFrom:
+ secretKeyRef:
+ name: my-secret
+ key: api-key
+ config:
+ exporters:
+ otlphttp:
+ endpoint: ""
+ headers:
+ api-key: ${env:API_KEY}
+ exportPipeline:
+ logs:
+ - otlphttp
+ metrics:
+ - otlphttp
+ traces:
+ - otlphttp
+```
+
+
+You can export metrics, logs, and traces from your Crossplane installation, Spaces
+infrastructure (controller, API, router, etc.), provider-helm, and
+provider-kubernetes.
+
+### Router metrics
+
+The Spaces router component uses Envoy as a reverse proxy and exposes detailed
+metrics about request handling, circuit breakers, and connection pooling.
+Upbound collects these metrics in your Space after you enable Space-level
+observability.
+
+Envoy metrics in Upbound include:
+
+- **Upstream cluster metrics** - Request status codes, timeouts, retries, and latency for traffic to control planes and services
+- **Circuit breaker metrics** - Connection and request circuit breaker state for both `DEFAULT` and `HIGH` priority levels
+- **Downstream listener metrics** - Client connections and requests received
+- **HTTP connection manager metrics** - End-to-end HTTP request processing and latency
+
+For a complete list of available router metrics and example PromQL queries, see the [Router metrics reference][router-ref].
+
+### Router tracing
+
+The Spaces router generates distributed traces through OpenTelemetry integration,
+providing end-to-end visibility into request flow across the system. Use these
+traces to debug latency issues, understand request paths, and correlate errors
+across services.
+
+The router uses:
+
+- **Protocol**: OTLP (OpenTelemetry Protocol) over gRPC
+- **Service name**: `spaces-router`
+- **Transport**: TLS-encrypted connection to telemetry collector
+
+#### Trace configuration
+
+Enable tracing and configure the sampling rate with the following Helm values:
+
+```yaml
+observability:
+ enabled: true
+ tracing:
+ enabled: true
+ sampling:
+ rate: 0.1 # Sample 10% of new traces (0.0-1.0)
+```
+
+The sampling behavior depends on whether a parent trace context exists:
+
+- **With parent context**: If a `traceparent` header is present, the parent's
+ sampling decision is respected, enabling proper distributed tracing across services.
+- **Root spans**: For new traces without a parent, Envoy samples based on
+ `x-request-id` hashing. The default sampling rate is 10%.
+
+#### TLS configuration for external collectors
+
+To send traces to an external OTLP collector, configure the endpoint and TLS settings:
+
+```yaml
+observability:
+ enabled: true
+ tracing:
+ enabled: true
+ endpoint: "otlp-gateway.example.com"
+ port: 443
+ tls:
+ caBundleSecretRef: "custom-ca-secret"
+```
+
+If `caBundleSecretRef` is set, the router uses the CA bundle from the referenced
+Kubernetes secret. The secret must contain a key named `ca.crt` with the
+PEM-encoded CA bundle. If not set, the router uses the Spaces CA for the
+in-cluster collector.
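+
+A sketch of creating that secret, assuming it lives in the `upbound-system` namespace (the namespace and certificate path are assumptions):
+
+```shell
+# The key must be named ca.crt and contain the PEM-encoded CA bundle
+kubectl -n upbound-system create secret generic custom-ca-secret \
+  --from-file=ca.crt=/path/to/ca-bundle.pem
+```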
+
+#### Custom trace tags
+
+The router adds custom tags to every span to enable filtering and grouping by
+control plane:
+
+| Tag | Source | Description |
+|-----|--------|-------------|
+| `controlplane.id` | `x-upbound-mxp-id` header | Control plane UUID |
+| `controlplane.name` | `x-upbound-mxp-host` header | Internal vcluster hostname |
+| `hostcluster.id` | `x-upbound-hostcluster-id` header | Host cluster identifier |
+
+These tags enable queries like "show all slow requests to control plane X" or
+"find errors for control planes in host cluster Y."
+
+#### Example trace
+
+The following example shows the attributes from a successful GET request:
+
+```text
+Span: ingress
+├─ Service: spaces-router
+├─ Duration: 8.025ms
+├─ Attributes:
+│ ├─ http.method: GET
+│ ├─ http.status_code: 200
+│ ├─ upstream_cluster: ctp-b2b37aaa-ee55-492c-ba0c-4d561a6325fa-api-cluster
+│ ├─ controlplane.id: b2b37aaa-ee55-492c-ba0c-4d561a6325fa
+│ ├─ controlplane.name: vcluster.mxp-b2b37aaa-ee55-492c-ba0c-4d561a6325fa-system
+│ └─ response_size: 1827
+```
+
+## Available metrics
+
+Space-level observability collects metrics from multiple infrastructure components:
+
+### Infrastructure component metrics
+
+- Crossplane controller metrics
+- Spaces controller, API, and router metrics
+- Provider metrics (provider-helm, provider-kubernetes)
+
+### Router metrics
+
+The router component exposes Envoy proxy metrics for monitoring traffic flow and
+service health. Key metric categories include:
+
+- `envoy_cluster_upstream_rq_*` - Upstream request metrics (status codes, timeouts, retries, latency)
+- `envoy_cluster_circuit_breakers_*` - Circuit breaker state and capacity
+- `envoy_listener_downstream_*` - Client connection and request metrics
+- `envoy_http_downstream_*` - HTTP request processing metrics
+
+Example query to monitor total request rate:
+
+```promql
+sum(rate(envoy_cluster_upstream_rq_total{job="spaces-router-envoy"}[5m]))
+```
+
+Example query for P95 latency:
+
+```promql
+histogram_quantile(
+ 0.95,
+ sum by (le) (
+ rate(envoy_cluster_upstream_rq_time_bucket{job="spaces-router-envoy"}[5m])
+ )
+)
+```
+
+For detailed router metrics documentation and more query examples, see the [Router metrics reference][router-ref].
+
+
+## OpenTelemetryCollector image
+
+
+Control plane (`SharedTelemetry`) and Space observability deploy the same custom
+OpenTelemetry Collector image. The OpenTelemetry Collector image supports
+`otlphttp`, `datadog`, and `debug` exporters.
+
+For more information on observability configuration, review the [Helm chart reference][helm-chart-reference].
+
+## Observability in control planes
+
+Read the [observability documentation][observability-documentation] to learn
+about the features Upbound offers for collecting telemetry from control planes.
+
+
+## Router metrics reference {#router-ref}
+
+To avoid overwhelming observability tools with hundreds of Envoy metrics, an
+allow-list filters metrics to only the following metric families.
+
+### Upstream cluster metrics
+
+Metrics tracking requests sent from Envoy to configured upstream clusters.
+Individual control planes, spaces-api, and other services are each considered
+an upstream cluster. Use these metrics to monitor service health, identify
+upstream errors, and measure backend latency.
+
+| Metric | Description |
+|--------|-------------|
+| `envoy_cluster_upstream_rq_xx_total` | HTTP status codes (2xx, 3xx, 4xx, 5xx) with label `envoy_response_code_class` |
+| `envoy_cluster_upstream_rq_timeout_total` | Requests that timed out waiting for upstream |
+| `envoy_cluster_upstream_rq_retry_limit_exceeded_total` | Requests that exhausted retry attempts |
+| `envoy_cluster_upstream_rq_total` | Total upstream requests |
+| `envoy_cluster_upstream_rq_time_bucket` | Latency histogram (for P50/P95/P99 calculations) |
+| `envoy_cluster_upstream_rq_time_sum` | Sum of request durations |
+| `envoy_cluster_upstream_rq_time_count` | Count of requests |
+
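+For example, a query sketch for the upstream 5xx rate, assuming the same `job` label used in the earlier query examples:
+
+```promql
+sum(rate(envoy_cluster_upstream_rq_xx_total{job="spaces-router-envoy", envoy_response_code_class="5"}[5m]))
+```
+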
+### Circuit breaker metrics
+
+
+
+Metrics tracking circuit breaker state and remaining capacity. Circuit breakers
+prevent cascading failures by limiting connections and concurrent requests to
+unhealthy upstreams. Two priority levels exist: `DEFAULT` for watch requests and
+`HIGH` for API requests.
+
+
+| Name | Description |
+|--------|-------------|
+| `envoy_cluster_circuit_breakers_default_cx_open` | `DEFAULT` priority connection circuit breaker open (gauge) |
+| `envoy_cluster_circuit_breakers_default_rq_open` | `DEFAULT` priority request circuit breaker open (gauge) |
+| `envoy_cluster_circuit_breakers_default_remaining_cx` | Available `DEFAULT` priority connections (gauge) |
+| `envoy_cluster_circuit_breakers_default_remaining_rq` | Available `DEFAULT` priority request slots (gauge) |
+| `envoy_cluster_circuit_breakers_high_cx_open` | `HIGH` priority connection circuit breaker open (gauge) |
+| `envoy_cluster_circuit_breakers_high_rq_open` | `HIGH` priority request circuit breaker open (gauge) |
+| `envoy_cluster_circuit_breakers_high_remaining_cx` | Available `HIGH` priority connections (gauge) |
+| `envoy_cluster_circuit_breakers_high_remaining_rq` | Available `HIGH` priority request slots (gauge) |
+
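+For example, a query sketch that lists upstream clusters whose `DEFAULT` priority request circuit breaker is currently open:
+
+```promql
+envoy_cluster_circuit_breakers_default_rq_open{job="spaces-router-envoy"} > 0
+```
+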
+### Downstream listener metrics
+
+Metrics tracking requests received from clients such as kubectl and API consumers.
+Use these metrics to monitor client connection patterns, overall request volume,
+and responses sent to external users.
+
+| Name | Description |
+|--------|-------------|
+| `envoy_listener_downstream_rq_xx_total` | HTTP status codes for responses sent to clients |
+| `envoy_listener_downstream_rq_total` | Total requests received from clients |
+| `envoy_listener_downstream_cx_total` | Total connections from clients |
+| `envoy_listener_downstream_cx_active` | Currently active client connections (gauge) |
+
+
+
+### HTTP connection manager metrics
+
+
+Metrics from Envoy's HTTP connection manager tracking end-to-end request
+processing. These metrics provide a comprehensive view of the HTTP request
+lifecycle including status codes and client-perceived latency.
+
+| Name | Description |
+|--------|-------------|
+| `envoy_http_downstream_rq_xx` | HTTP status codes (note: no `_total` suffix for this metric family) |
+| `envoy_http_downstream_rq_total` | Total HTTP requests received |
+| `envoy_http_downstream_rq_time_bucket` | Downstream request latency histogram |
+| `envoy_http_downstream_rq_time_sum` | Sum of downstream request durations |
+| `envoy_http_downstream_rq_time_count` | Count of downstream requests |
+
+[router-ref]: #router-ref
+[observability-documentation]: /spaces/howtos/observability
+[opentelemetry-collector]: https://opentelemetry.io/docs/collector/
+[opentelemetry-operator]: https://opentelemetry.io/docs/kubernetes/operator/
+[helm-chart-reference]: /reference/helm-reference
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/spaces-management.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/spaces-management.md
new file mode 100644
index 000000000..a9290acab
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/spaces-management.md
@@ -0,0 +1,214 @@
+---
+title: Interacting with Disconnected Spaces
+sidebar_position: 10
+description: Common operations in Spaces
+---
+
+
+## Spaces management
+
+### Create a Space
+
+To install an Upbound Space into a cluster, it's recommended you dedicate an entire Kubernetes cluster to the Space. You can use [up space init][up-space-init] to install an Upbound Space. Below is an example:
+
+```bash
+up space init "v1.9.0"
+```
+:::tip
+For a full guide to get started with Spaces, read the [quickstart][quickstart] guide.
+:::
+
+You can also install the Helm chart for Spaces directly. For a Spaces install to succeed, you must first install and configure some prerequisites:
+
+- UXP
+- provider-helm and provider-kubernetes
+- cert-manager
+
+Furthermore, the Spaces chart requires a pull secret, which Upbound must provide to you.
+
+```bash
+helm -n upbound-system upgrade --install spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "v1.9.0" \
+ --set "ingress.host=your-host.com" \
+ --set "clusterType=eks" \
+ --set "account=your-upbound-account" \
+ --wait
+```
+
+For a complete tutorial of the Helm install, read one of the deployment guides for [AWS][aws], [Azure][azure], or [GCP][gcp], which cover the step-by-step process.
+
+### Upgrade a Space
+
+To upgrade a Space from one version to the next, use [up space upgrade][up-space-upgrade]. Spaces supports upgrading from version `x.N.*` to version `x.N+1.*`.
+
+```bash
+up space upgrade "v1.9.0"
+```
+
+You can also upgrade a Space by manually bumping the Helm chart version. Before
+upgrading, review the release notes for any breaking changes or
+special requirements:
+
+1. Review the release notes for the target version in the [Spaces Release Notes][spaces-release-notes]
+2. Upgrade the Space by updating the helm chart version:
+
+```bash
+helm -n upbound-system upgrade spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "v1.9.0" \
+ --reuse-values \
+ --wait
+```
+
+For major version upgrades or configuration changes, extract your current values
+and adjust:
+
+```bash
+# Extract current values to a file
+helm -n upbound-system get values spaces > spaces-values.yaml
+
+# Upgrade with modified values
+helm -n upbound-system upgrade spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "v1.9.0" \
+ -f spaces-values.yaml \
+ --wait
+```
+
+### Downgrade a Space
+
+To roll back a Space from one version to the previous one, use [up space upgrade][up-space-upgrade-1]. Spaces supports downgrading from version `x.N.*` to version `x.N-1.*`.
+
+```bash
+up space upgrade --rollback
+```
+
+You can also downgrade a Space manually using Helm by specifying an earlier version:
+
+```bash
+helm -n upbound-system upgrade spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "v1.8.0" \
+ --reuse-values \
+ --wait
+```
+
+When downgrading, make sure to:
+1. Check the [release notes][release-notes] for specific downgrade instructions
+2. Verify compatibility between the downgraded Space and any control planes
+3. Back up any critical data before proceeding
+
+### Uninstall a Space
+
+To uninstall a Space from a Kubernetes cluster, use [up space destroy][up-space-destroy]. A destroy operation uninstalls core components and orphans control planes and their associated resources.
+
+```bash
+up space destroy
+```
+
+## Control plane management
+
+You can manage control planes in a Space via the [up CLI][up-cli] or the Spaces-local Kubernetes API. When you install a Space, it defines a new API type, `kind: ControlPlane`, that you can use to create and manage control planes in the Space.
+
+### Create a control plane
+
+To create a control plane in a Space using `up`, run the following:
+
+```bash
+up ctp create ctp1
+```
+
+You can also declare a new control plane like the example below and apply it to your Spaces cluster:
+
+```yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ name: ctp1
+ namespace: default
+spec:
+ writeConnectionSecretToRef:
+ name: kubeconfig-ctp1
+ namespace: default
+```
+
+This manifest:
+
+- Creates a new control plane in the space called `ctp1`.
+- Publishes the kubeconfig to connect to the control plane to a secret in the Spaces cluster, called `kubeconfig-ctp1`
+
+### Connect to a control plane
+
+To connect to a control plane in a Space using `up`, run the following:
+
+```bash
+up ctp connect new-control-plane
+```
+
+The command changes your kubeconfig's current context to the control plane you specify. If you want to change your kubeconfig back to a previous context, run:
+
+```bash
+up ctp disconnect
+```
+
+If you configured your control plane to publish connection details, you can also access it this way. Once the control plane is ready, use the secret (containing connection details) to connect to the API server of your control plane.
+
+```bash
+kubectl get secret kubeconfig-ctp1 -n default -o jsonpath='{.data.kubeconfig}' | base64 -d > /tmp/ctp1.yaml
+```
+
+Reference the kubeconfig whenever you want to interact directly with the API server of the control plane (vs the Space's API server):
+
+```bash
+kubectl get providers --kubeconfig=/tmp/ctp1.yaml
+```
+
+### Configure a control plane
+
+Spaces offers a built-in feature that allows you to connect a control plane to a Git source. This experience mirrors how control planes run in [Upbound's SaaS environment][upbound-s-saas-environment]. Upbound recommends using the built-in Git integration to drive configuration of your control planes in a Space.
+
+Learn more in the [Spaces Git integration][spaces-git-integration] documentation.
+
+### List control planes
+
+To list all control planes in a Space using `up`, run the following:
+
+```bash
+up ctp list
+```
+
+Or you can use Kubernetes-style semantics to list the control planes:
+
+```bash
+kubectl get controlplanes
+```
+
+
+### Delete a control plane
+
+To delete a control plane in a Space using `up`, run the following:
+
+```bash
+up ctp delete ctp1
+```
+
+Or you can use Kubernetes-style semantics to delete the control plane:
+
+```bash
+kubectl delete controlplane ctp1
+```
+
+
+[up-space-init]: /reference/cli-reference
+[quickstart]: /
+[aws]: /spaces/howtos/self-hosted/self-hosted-spaces-deployment
+[azure]:/spaces/howtos/self-hosted/self-hosted-spaces-deployment
+[gcp]:/spaces/howtos/self-hosted/self-hosted-spaces-deployment
+[up-space-upgrade]: /reference/cli-reference
+[spaces-release-notes]: /reference/release-notes/spaces
+[up-space-upgrade-1]: /reference/cli-reference
+[release-notes]: /reference/release-notes/spaces
+[up-space-destroy]: /reference/cli-reference
+[up-cli]: /reference/cli-reference
+[upbound-s-saas-environment]: /spaces/howtos/self-hosted/spaces-management
+[spaces-git-integration]: /spaces/howtos/self-hosted/gitops
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/troubleshooting.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/troubleshooting.md
new file mode 100644
index 000000000..8d1ca6517
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/troubleshooting.md
@@ -0,0 +1,132 @@
+---
+title: Troubleshooting
+sidebar_position: 100
+description: A guide for troubleshooting an issue that occurs in a Space
+---
+
+Find guidance below on how to find solutions for issues you encounter when deploying and using an Upbound Space. Use the tips below as a supplement to the observability metrics discussed in the [Observability][observability] page.
+
+## General tips
+
+Most issues fall into two general categories:
+
+1. issues with the Spaces management plane
+2. issues on a control plane
+
+If your control plane doesn't reach a `Ready` state, it's indicative of the former. If your control plane is in a created and running state, but resources aren't reconciling, it's indicative of the latter.
+
+### Spaces component layout
+
+Run `kubectl get pods -A` against the cluster hosting a Space. You should see a variety of pods across several namespaces. It should look something like this:
+
+```bash
+NAMESPACE NAME READY STATUS RESTARTS AGE
+cert-manager cert-manager-6d6769565c-mc5df 1/1 Running 0 25m
+cert-manager cert-manager-cainjector-744bb89575-nw4fg 1/1 Running 0 25m
+cert-manager cert-manager-webhook-759d6dcbf7-ps4mq 1/1 Running 0 25m
+ingress-nginx ingress-nginx-controller-7f8ccfccc6-6szlp 1/1 Running 0 25m
+kube-system coredns-5d78c9869d-4p477 1/1 Running 0 26m
+kube-system coredns-5d78c9869d-pdxt6 1/1 Running 0 26m
+kube-system etcd-kind-control-plane 1/1 Running 0 26m
+kube-system kindnet-8s7pq 1/1 Running 0 26m
+kube-system kube-apiserver-kind-control-plane 1/1 Running 0 26m
+kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 26m
+kube-system kube-proxy-l68r8 1/1 Running 0 26m
+kube-system kube-scheduler-kind-control-plane 1/1 Running 0 26m
+local-path-storage local-path-provisioner-6bc4bddd6b-qsdjt 1/1 Running 0 26m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system coredns-5dc69d6447-f56rh-x-kube-system-x-vcluster 1/1 Running 0 21m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system crossplane-6b6d67bc66-6b8nx-x-upbound-system-x-vcluster 1/1 Running 0 20m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system crossplane-rbac-manager-78f6fc7cb4-pjkhc-x-upbound-s-12253c3c4e 1/1 Running 0 20m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system kube-state-metrics-7f8f4dcc5b-8p8c4 1/1 Running 0 22m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system mxp-gateway-68f546b9c8-xnz5j-x-upbound-system-x-vcluster 1/1 Running 0 20m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system mxp-ksm-config-54655667bb-hv9br 1/1 Running 0 22m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system mxp-readyz-5f7f97d967-b98bw 1/1 Running 0 22m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system otlp-collector-56d7d46c8d-g5sh5-x-upbound-system-x-vcluster 1/1 Running 0 20m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system vcluster-67c9fb8959-ppb2m 1/1 Running 0 22m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system vcluster-api-6bfbccc49d-ffgpj 1/1 Running 0 22m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system vcluster-controller-7cc6855656-8c46b 1/1 Running 0 22m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system vcluster-etcd-0 1/1 Running 0 22m
+mxp-706c49fa-5bb8-4a7e-9f41-2fc38ef4b065-system vector-754b494b84-wljw4 1/1 Running 0 22m
+mxp-system mxp-charts-chartmuseum-7587f77558-8tltb 1/1 Running 0 23m
+upbound-system crossplane-b4dc7b4c9-6hjh5 1/1 Running 0 25m
+upbound-system crossplane-contrib-provider-helm-ce18dd03e6e4-7945d8985-4gcwr 1/1 Running 0 24m
+upbound-system crossplane-contrib-provider-kubernetes-1f1e32c1957d-577756gs2x4 1/1 Running 0 24m
+upbound-system crossplane-rbac-manager-d8cb49cbc-gbvvf 1/1 Running 0 25m
+upbound-system spaces-controller-6647677cf9-5zl5q 1/1 Running 0 24m
+upbound-system spaces-router-bc78c96d7-kzts2 2/2 Running 0 24m
+```
+
+What you are seeing is:
+
+- Pods in the `upbound-system` namespace are components required to run the management plane of the Space. This includes the `spaces-controller`, `spaces-router`, and an installation of UXP.
+- Pods in the `mxp-{GUID}-system` namespace are components that collectively power a control plane. Notable callouts include pods with names like `vcluster-api-{GUID}` and `vcluster-controller-{GUID}`, which are integral components of a control plane.
+- Pods in other notable namespaces, including `cert-manager` and `ingress-nginx`, are prerequisite components that support a Space's successful operation.
+
+
+
+### Troubleshooting tips for the Spaces management plane
+
+Start by getting the status of all the pods in a Space:
+
+1. Make sure the current context of your kubeconfig points at the Kubernetes cluster hosting your Space
+2. Get the status of all the pods in the Space:
+```bash
+kubectl get pods -A
+```
+3. Scan the `Status` column to see if any of the pods report a status besides `Running`.
+4. Scan the `Restarts` column to see if any of the pods have restarted.
+5. If you notice a Status other than `Running` or see pods that restarted, you should investigate their events by running
+```bash
+kubectl describe pod <pod-name> -n <namespace>
+```
+
+Next, inspect the status of objects and releases:
+
+1. Make sure the current context of your kubeconfig points at the Kubernetes cluster hosting your Space
+2. Inspect the objects in your Space. If any are unhealthy, describe those objects to get the events:
+```bash
+kubectl get objects
+```
+3. Inspect the releases in your Space. If any are unhealthy, describe those releases to get the events:
+```bash
+kubectl get releases
+```
+
+### Troubleshooting tips for control planes in a Space
+
+General troubleshooting in a control plane starts by fetching the events of the control plane:
+
+1. Make sure the current context of your kubeconfig points at the Kubernetes cluster hosting your Space
+2. Run the following to fetch your control planes.
+```bash
+kubectl get ctp
+```
+3. Describe the control plane by providing its name, found in the preceding instruction.
+```bash
+kubectl describe controlplanes.spaces.upbound.io <control-plane-name>
+```
+
+## Issues
+
+
+### Your control plane is stuck in a 'creating' state
+
+#### Error: unknown field "ports" in io.k8s.api.networking.v1.NetworkPolicySpec
+
+This error is emitted by a Helm release named `control-plane-host-policies` attempting to be installed by the Spaces software. The full error is:
+
+_CannotCreateExternalResource failed to install release: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(NetworkPolicy.spec): unknown field "ports" in io.k8s.api.networking.v1.NetworkPolicySpec_
+
+This error may be caused by running a Space on an earlier version of Kubernetes than is supported (`v1.26` or later). To resolve this issue, upgrade the host Kubernetes cluster to a supported version.
+
+### Your Spaces install fails
+
+#### Error: You tried to install a Space on a previous Crossplane installation
+
+If you try to install a Space on an existing cluster that previously had Crossplane or UXP on it, you may encounter errors. Due to how the Spaces installer tests for the presence of UXP, it may detect orphaned CRDs that weren't cleaned up by the previous uninstall of Crossplane. You may need to manually [remove old Crossplane CRDs][remove-old-crossplane-crds] for the installer to properly detect the UXP prerequisite.
+
+
+
+
+[observability]: /spaces/howtos/observability
+[remove-old-crossplane-crds]: https://docs.crossplane.io/latest/guides/uninstall-crossplane/
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/use-argo.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/use-argo.md
new file mode 100644
index 000000000..eff5558db
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/use-argo.md
@@ -0,0 +1,223 @@
+---
+title: Use ArgoCD Plugin
+sidebar_position: 15
+description: A guide for integrating Argo with control planes in a Space.
+aliases:
+ - /all-spaces/self-hosted-spaces/use-argo
+ - /deploy/disconnected-spaces/use-argo-flux
+ - /all-spaces/self-hosted-spaces/use-argo-flux
+ - /connect/use-argo
+---
+
+
+
+:::important
+This feature is in preview and is off by default. To enable, set `features.alpha.argocdPlugin.enabled=true` when installing Spaces:
+
+```bash
+up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
+ ...
+ --set "features.alpha.argocdPlugin.enabled=true"
+```
+:::
+
+Spaces provides an optional plugin to assist with integrating a control plane in a Space with Argo CD. You must enable the plugin for the entire Space at Spaces install or upgrade time. The plugin's job is to propagate the connection details of each control plane in a Space to Argo CD. By default, Upbound stores these connection details in a Kubernetes secret named after the control plane. To run Argo CD across multiple namespaces, Upbound recommends enabling the `features.alpha.argocdPlugin.useUIDFormatForCTPSecrets` flag to use a UID-based format for secret names to avoid conflicts.
+
+:::tip
+For general guidance on integrating Upbound with GitOps flows, see [GitOps with Control Planes][gitops-with-control-planes].
+:::
+
+## On cluster Argo CD
+
+If you are running Argo CD on the same cluster as the Space, run the following to enable the plugin:
+
+
+
+
+
+
+```bash {hl_lines="3-4"}
+up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
+ --set "account=${UPBOUND_ACCOUNT}" \
+ --set "features.alpha.argocdPlugin.enabled=true" \
+ --set "features.alpha.argocdPlugin.useUIDFormatForCTPSecrets=true" \
+ --set "features.alpha.argocdPlugin.target.secretNamespace=argocd"
+```
+
+
+
+
+
+```bash {hl_lines="7-8"}
+helm -n upbound-system upgrade --install spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "${SPACES_VERSION}" \
+ --set "ingress.host=${SPACES_ROUTER_HOST}" \
+ --set "account=${UPBOUND_ACCOUNT}" \
+ --set "features.alpha.argocdPlugin.enabled=true" \
+ --set "features.alpha.argocdPlugin.useUIDFormatForCTPSecrets=true" \
+ --set "features.alpha.argocdPlugin.target.secretNamespace=argocd" \
+ --wait
+```
+
+
+
+
+
+
+The important flags are:
+
+- `features.alpha.argocdPlugin.enabled=true`
+- `features.alpha.argocdPlugin.useUIDFormatForCTPSecrets=true`
+- `features.alpha.argocdPlugin.target.secretNamespace=argocd`
+
+The first flag enables the feature, the second uses a UID-based format for control plane secret names to avoid conflicts, and the third indicates the namespace on the cluster where you installed Argo CD.
+
+Be sure to [configure Argo][configure-argo] after it's installed.
+
+## External cluster Argo CD
+
+If you are running Argo CD on an external cluster from where you installed your Space, you need to provide some extra flags:
+
+
+
+
+
+
+```bash {hl_lines="3-7"}
+up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
+ --set "account=${UPBOUND_ACCOUNT}" \
+ --set "features.alpha.argocdPlugin.enabled=true" \
+ --set "features.alpha.argocdPlugin.useUIDFormatForCTPSecrets=true" \
+ --set "features.alpha.argocdPlugin.target.secretNamespace=argocd" \
+ --set "features.alpha.argocdPlugin.target.externalCluster.enabled=true" \
+ --set "features.alpha.argocdPlugin.target.externalCluster.secret.name=my-argo-cluster" \
+ --set "features.alpha.argocdPlugin.target.externalCluster.secret.key=kubeconfig"
+```
+
+
+
+
+
+```bash {hl_lines="7-11"}
+helm -n upbound-system upgrade --install spaces \
+ oci://xpkg.upbound.io/spaces-artifacts/spaces \
+ --version "${SPACES_VERSION}" \
+ --set "ingress.host=${SPACES_ROUTER_HOST}" \
+ --set "account=${UPBOUND_ACCOUNT}" \
+ --set "features.alpha.argocdPlugin.enabled=true" \
+ --set "features.alpha.argocdPlugin.useUIDFormatForCTPSecrets=true" \
+ --set "features.alpha.argocdPlugin.target.secretNamespace=argocd" \
+ --set "features.alpha.argocdPlugin.target.externalCluster.enabled=true" \
+ --set "features.alpha.argocdPlugin.target.externalCluster.secret.name=my-argo-cluster" \
+ --set "features.alpha.argocdPlugin.target.externalCluster.secret.key=kubeconfig" \
+ --wait
+```
+
+
+
+
+
+
+The extra flags are:
+
+- `features.alpha.argocdPlugin.target.externalCluster.enabled=true`
+- `features.alpha.argocdPlugin.target.externalCluster.secret.name=my-argo-cluster`
+- `features.alpha.argocdPlugin.target.externalCluster.secret.key=kubeconfig`
+
+These flags tell the plugin (running in Spaces) where your Argo CD instance is. After you've done this at install-time, you also need to create a `Secret` on the Spaces cluster. This secret must contain a kubeconfig pointing to your Argo CD instance. The secret needs to be in the same namespace as the `spaces-controller`, which is `upbound-system`.
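+
+For example, assuming the external cluster's kubeconfig is saved locally as `argocd-kubeconfig.yaml` (the file name here is illustrative), a command along these lines creates a secret matching the `secret.name` and `secret.key` values set above:
+
+```shell
+# Create the kubeconfig secret the plugin uses to reach the external Argo CD cluster.
+kubectl -n upbound-system create secret generic my-argo-cluster \
+  --from-file=kubeconfig=./argocd-kubeconfig.yaml
+```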
+
+Once you enable the plugin and configure it, the plugin automatically propagates connection details for your control planes to your Argo CD instance. You can then target the control plane and use Argo to sync Crossplane-related objects to it.
+
+Be sure to [configure Argo][configure-argo-1] after it's installed.
+
+## Configure Argo
+
+Argo's default configuration causes it to query for resource kinds that don't exist in control planes. You should configure Argo's [general configmap][general-configmap] to include only the resource groups/kinds that make sense in the context of control planes. For example, the concept of `nodes` isn't exposed in control planes.
+
+To configure Argo CD, connect to the cluster where you've installed it and edit the configmap:
+
+```bash
+kubectl edit configmap argocd-cm -n argocd
+```
+
+Adjust the resource inclusions and exclusions under the `data` field of the configmap:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: argocd-cm
+ namespace: argocd
+data:
+ resource.exclusions: |
+ - apiGroups:
+ - "*"
+ kinds:
+ - "*"
+ clusters:
+ - "*"
+ resource.inclusions: |
+ - apiGroups:
+ - "*"
+ kinds:
+ - Provider
+ - Configuration
+ clusters:
+ - "*"
+```
+
+The preceding configuration causes Argo to exclude syncing **all** resource group/kinds--except Crossplane `providers` and `configurations`--for **all** control planes. You're encouraged to adjust the `resource.inclusions` to include the types that make sense for your control plane, such as an `XRD` you've built with Crossplane. You're also encouraged to customize the `clusters` pattern to selectively apply these exclusions/inclusions to control planes (for example, `control-plane-prod-*`).
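+
+As a sketch of such a customization, the following `resource.inclusions` adds a custom API group alongside the Crossplane package types; the `platform.example.org` group and `control-plane-prod-*` cluster pattern are placeholders for your own API group and naming convention, not required values:
+
+```yaml
+  resource.inclusions: |
+    - apiGroups:
+      - "*"
+      kinds:
+      - Provider
+      - Configuration
+      clusters:
+      - "*"
+    - apiGroups:
+      - "platform.example.org" # your XRD's API group (placeholder)
+      kinds:
+      - "*"
+      clusters:
+      - "control-plane-prod-*" # only production control planes (placeholder)
+```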
+
+## Control plane connection secrets
+
+To deploy control planes through Argo CD, you need to configure the `writeConnectionSecretToRef` field in your control plane spec. This field specifies where to store the control plane's `kubeconfig` and makes connection details available to Argo CD.
+
+### Basic Configuration
+
+In your control plane manifest, include the `writeConnectionSecretToRef` field:
+
+```yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ name: my-control-plane
+ namespace: my-control-plane-group
+spec:
+ writeConnectionSecretToRef:
+ name: kubeconfig-my-control-plane
+ namespace: my-control-plane-group
+ # ... other control plane configuration
+```
+
+### Parameters
+
+The `writeConnectionSecretToRef` field requires two parameters:
+
+- `name`: A unique name for the secret containing the kubeconfig (`kubeconfig-my-control-plane`)
+- `namespace`: The Kubernetes namespace where you store the secret, which must match the metadata namespace. The system copies it into the `argocd` namespace when you set the `features.alpha.argocdPlugin.target.secretNamespace=argocd` configuration parameter.
+
+Control plane labels automatically propagate to the connection secret, which allows you to use label selectors in Argo CD for automated discovery and management.
+
+This configuration enables Argo CD to automatically discover and manage resources on your control planes.
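+
+For example, the following Argo CD `ApplicationSet` sketch uses the cluster generator with a label selector to target labeled control plane secrets; the `env: prod` label, repository URL, and path are assumptions for illustration, not values Upbound sets for you:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: ApplicationSet
+metadata:
+  name: control-plane-apis
+  namespace: argocd
+spec:
+  generators:
+    # Matches Argo CD cluster secrets whose labels include env=prod, such as
+    # connection secrets propagated from control planes carrying that label.
+    - clusters:
+        selector:
+          matchLabels:
+            env: prod
+  template:
+    metadata:
+      name: 'apis-{{name}}'
+    spec:
+      project: default
+      source:
+        repoURL: https://github.com/example-org/platform-apis # placeholder repository
+        targetRevision: main
+        path: apis
+      destination:
+        server: '{{server}}'
+      syncPolicy:
+        automated: {}
+```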
+
+
+[gitops-with-control-planes]: /spaces/howtos/cloud-spaces/gitops
+[configure-argo]: #configure-argo
+[configure-argo-1]: #configure-argo
+[general-configmap]: https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-cm-yaml/
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/_category_.json b/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/_category_.json
new file mode 100644
index 000000000..c5ecc93f6
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/_category_.json
@@ -0,0 +1,11 @@
+{
+ "label": "Workload Identity Configuration",
+ "position": 2,
+ "collapsed": true,
+ "customProps": {
+ "plan": "business"
+ }
+
+}
+
+
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/backup-restore-config.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/backup-restore-config.md
new file mode 100644
index 000000000..935ca69ec
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/backup-restore-config.md
@@ -0,0 +1,384 @@
+---
+title: Backup and Restore Workload ID
+weight: 1
+description: Configure workload identity for Spaces Backup and Restore
+---
+import GlobalLanguageSelector, { CodeBlock } from '@site/src/components/GlobalLanguageSelector';
+
+
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant temporary
+AWS credentials to your Kubernetes pod based on a service account. Assigning IAM roles and service accounts allows the pod to assume the IAM role dynamically and much more securely than static credentials.
+
+This guide walks you through creating an IAM trust role policy and applying it
+to your EKS cluster to handle backup and restore storage.
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant your
+self-hosted Space cluster access to your cloud providers. Workload identity
+authentication grants temporary Azure credentials to your Kubernetes pod based on
+a service account. Assigning managed identities and service accounts allows the pod to
+authenticate with Azure resources dynamically and much more securely than static credentials.
+
+This guide walks you through creating a managed identity and federated credential for your AKS
+cluster to handle backup and restore storage.
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant your
+self-hosted Space cluster access to your cloud providers. Workload identity
+authentication grants temporary GCP credentials to your Kubernetes pod based on
+a service account. Assigning IAM roles and service accounts allows the pod to
+access cloud resources dynamically and much more securely than static credentials.
+
+This guide walks you through configuring workload identity for your GKE
+cluster to handle backup and restore storage.
+
+
+
+## Prerequisites
+
+
+To set up a workload-identity, you'll need:
+
+
+- A self-hosted Space cluster
+- Administrator access in your cloud provider
+- Helm and `kubectl`
+
+## About the backup and restore component
+
+The `mxp-controller` component handles backup and restore workloads. It needs to
+access your cloud storage to store and retrieve backups. By default, this
+component runs in each control plane's host namespace.
+
+## Configuration
+
+
+
+Upbound supports workload-identity configurations in AWS with IAM Roles for
+Service Accounts and EKS pod identity association.
+
+#### IAM Roles for Service Accounts (IRSA)
+
+With IRSA, you can associate a Kubernetes service account in an EKS cluster with
+an AWS IAM role. Upbound authenticates workloads with that service account as
+the IAM role using temporary credentials instead of static role credentials.
+IRSA relies on the AWS STS `AssumeRoleWithWebIdentity` API to exchange OIDC ID tokens for
+the IAM role's temporary credentials. IRSA uses the `eks.amazonaws.com/role-arn`
+annotation to link the service account and the IAM role.
+
+First, create an IAM role with appropriate permissions to access your S3 bucket:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetObject",
+ "s3:PutObject",
+ "s3:ListBucket",
+ "s3:DeleteObject"
+ ],
+ "Resource": [
+ "arn:aws:s3:::${YOUR_BACKUP_BUCKET}",
+ "arn:aws:s3:::${YOUR_BACKUP_BUCKET}/*"
+ ]
+ }
+ ]
+}
+```
+
+Next, ensure your EKS cluster has an OIDC identity provider:
+
+```shell
+eksctl utils associate-iam-oidc-provider --cluster ${YOUR_CLUSTER_NAME} --approve
+```
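+
+If you need the value for `${YOUR_OIDC_PROVIDER}` used in the trust policy below, you can read the issuer from the cluster and strip the `https://` prefix:
+
+```shell
+# Print the cluster's OIDC issuer URL; drop the https:// prefix when using it
+# as the IAM condition key prefix and provider ARN suffix.
+aws eks describe-cluster --name ${YOUR_CLUSTER_NAME} \
+  --query "cluster.identity.oidc.issuer" --output text
+```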
+
+Configure the IAM role trust policy with the namespace for each
+provisioned control plane.
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "Federated": "arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:oidc-provider/${YOUR_OIDC_PROVIDER}"
+ },
+ "Action": "sts:AssumeRoleWithWebIdentity",
+ "Condition": {
+ "StringEquals": {
+ "${YOUR_OIDC_PROVIDER}:aud": "sts.amazonaws.com",
+ "${YOUR_OIDC_PROVIDER}:sub": "system:serviceaccount:${YOUR_NAMESPACE}:mxp-controller"
+ }
+ }
+ }
+ ]
+}
+```
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the Backup and Restore component:
+
+```shell
+--set controlPlanes.mxpController.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="${SPACES_BR_IAM_ROLE_ARN}"
+```
+
+This command allows the backup and restore component to authenticate with your
+dedicated IAM role in your EKS cluster environment.
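+
+As a sketch, the flag slots into a Spaces install or upgrade like this; the command omits the rest of your Spaces configuration values for brevity:
+
+```shell
+helm -n upbound-system upgrade --install spaces \
+  oci://xpkg.upbound.io/spaces-artifacts/spaces \
+  --version "${SPACES_VERSION}" \
+  --set "account=${UPBOUND_ACCOUNT}" \
+  --set controlPlanes.mxpController.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="${SPACES_BR_IAM_ROLE_ARN}" \
+  --wait
+```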
+
+#### EKS pod identities
+
+Upbound also supports EKS Pod Identity configuration. EKS Pod Identities allow
+you to create a pod identity association with your Kubernetes namespace, a
+service account, and an IAM role, which allows the EKS control plane to
+automatically handle the credential exchange.
+
+First, create an IAM role with appropriate permissions to access your S3 bucket:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetObject",
+ "s3:PutObject",
+ "s3:ListBucket",
+ "s3:DeleteObject"
+ ],
+ "Resource": [
+ "arn:aws:s3:::${YOUR_BACKUP_BUCKET}",
+ "arn:aws:s3:::${YOUR_BACKUP_BUCKET}/*"
+ ]
+ }
+ ]
+}
+```
+
+When you install or upgrade your Space with Helm, add the backup/restore values:
+
+```shell
+helm upgrade spaces spaces-helm-chart \
+ --set "billing.enabled=true" \
+ --set "backup.enabled=true" \
+ --set "backup.storage.provider=aws" \
+  --set "backup.storage.aws.region=${YOUR_AWS_REGION}" \
+  --set "backup.storage.aws.bucket=${YOUR_BACKUP_BUCKET}"
+```
+
+After Upbound provisions your control plane, create a Pod Identity Association
+with the `aws` CLI:
+
+```shell
+aws eks create-pod-identity-association \
+ --cluster-name ${YOUR_CLUSTER_NAME} \
+ --namespace ${YOUR_CONTROL_PLANE_NAMESPACE} \
+ --service-account mxp-controller \
+ --role-arn arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:role/backup-restore-role
+```
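+
+To confirm the association exists, you can list pod identity associations for the namespace and service account (an optional verification step):
+
+```shell
+aws eks list-pod-identity-associations \
+  --cluster-name ${YOUR_CLUSTER_NAME} \
+  --namespace ${YOUR_CONTROL_PLANE_NAMESPACE} \
+  --service-account mxp-controller
+```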
+
+
+
+
+
+Upbound supports workload-identity configurations in Azure with Azure's built-in
+workload identity feature.
+
+#### Prepare your cluster
+
+First, enable the OIDC issuer and workload identity in your AKS cluster:
+
+```shell
+az aks update --resource-group ${YOUR_RESOURCE_GROUP} --name ${YOUR_AKS_CLUSTER_NAME} --enable-oidc-issuer --enable-workload-identity
+```
+
+Next, find and store the OIDC issuer URL as an environment variable:
+
+```shell
+export AKS_OIDC_ISSUER="$(az aks show --name ${YOUR_AKS_CLUSTER_NAME} --resource-group ${YOUR_RESOURCE_GROUP} --query "oidcIssuerProfile.issuerUrl" --output tsv)"
+```
+
+#### Create a User-Assigned Managed Identity
+
+Create a new managed identity to associate with the backup and restore component:
+
+```shell
+az identity create --name backup-restore-identity --resource-group ${YOUR_RESOURCE_GROUP} --location ${YOUR_LOCATION}
+```
+
+Retrieve the client ID and store it as an environment variable:
+
+```shell
+export USER_ASSIGNED_CLIENT_ID="$(az identity show --name backup-restore-identity --resource-group ${YOUR_RESOURCE_GROUP} --query clientId -otsv)"
+```
+
+Grant the managed identity you created access to your Azure Storage account:
+
+```shell
+az role assignment create \
+ --role "Storage Blob Data Contributor" \
+ --assignee ${USER_ASSIGNED_CLIENT_ID} \
+ --scope /subscriptions/${YOUR_SUBSCRIPTION_ID}/resourceGroups/${YOUR_RESOURCE_GROUP}/providers/Microsoft.Storage/storageAccounts/${YOUR_STORAGE_ACCOUNT}
+```
+
+#### Apply the managed identity role
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the backup and restore component:
+
+```shell
+--set controlPlanes.mxpController.serviceAccount.annotations."azure\.workload\.identity/client-id"="${YOUR_USER_ASSIGNED_CLIENT_ID}"
+--set controlPlanes.mxpController.pod.customLabels."azure\.workload\.identity/use"="true"
+```
+
+#### Create a Federated Identity credential
+
+```shell
+az identity federated-credential create \
+ --name backup-restore-federated-identity \
+ --identity-name backup-restore-identity \
+ --resource-group ${YOUR_RESOURCE_GROUP} \
+ --issuer ${AKS_OIDC_ISSUER} \
+ --subject system:serviceaccount:${YOUR_CONTROL_PLANE_NAMESPACE}:mxp-controller
+```
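+
+Optionally, you can verify the federated credential was created against the managed identity:
+
+```shell
+az identity federated-credential list \
+  --identity-name backup-restore-identity \
+  --resource-group ${YOUR_RESOURCE_GROUP}
+```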
+
+
+
+
+
+Upbound supports workload-identity configurations in GCP with IAM principal
+identifiers and service account impersonation.
+
+#### Prepare your cluster
+
+First, enable Workload Identity Federation on your GKE cluster:
+
+```shell
+gcloud container clusters update ${YOUR_CLUSTER_NAME} \
+ --workload-pool=${YOUR_PROJECT_ID}.svc.id.goog \
+ --region=${YOUR_REGION}
+```
+
+#### Create a Google Service Account
+
+Create a service account for the backup and restore component:
+
+```shell
+gcloud iam service-accounts create backup-restore-sa \
+ --display-name "Backup Restore Service Account" \
+ --project ${YOUR_PROJECT_ID}
+```
+
+Grant the service account access to your Google Cloud Storage bucket:
+
+```shell
+gcloud projects add-iam-policy-binding ${YOUR_PROJECT_ID} \
+ --member "serviceAccount:backup-restore-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com" \
+ --role "roles/storage.objectAdmin"
+```
+
+#### Configure Workload Identity
+
+Create an IAM binding to grant the Kubernetes service account access to the Google service account:
+
+```shell
+gcloud iam service-accounts add-iam-policy-binding \
+ backup-restore-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com \
+ --role roles/iam.workloadIdentityUser \
+ --member "serviceAccount:${YOUR_PROJECT_ID}.svc.id.goog[${YOUR_CONTROL_PLANE_NAMESPACE}/mxp-controller]"
+```
+
+#### Apply the service account configuration
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the backup and restore component:
+
+```shell
+--set controlPlanes.mxpController.serviceAccount.annotations."iam\.gke\.io/gcp-service-account"="backup-restore-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com"
+```
+
+
+
+## Verify your configuration
+
+After you apply the configuration, use `kubectl` to verify the service account
+has the correct annotation:
+
+```shell
+kubectl get serviceaccount mxp-controller -n ${YOUR_CONTROL_PLANE_NAMESPACE} -o yaml
+```
+
+Verify the `mxp-controller` pod is running:
+
+```shell
+kubectl get pods -n ${YOUR_CONTROL_PLANE_NAMESPACE} | grep mxp-controller
+```
+
+## Restart workload
+
+You must manually restart a workload's pod when you add the workload identity annotations to the running pod's service account.
+
+
+
+This restart enables the EKS pod identity webhook to inject the necessary
+environment for using IRSA.
+
+
+
+
+
+This restart enables the workload identity webhook to inject the necessary
+environment for using Azure workload identity.
+
+
+
+
+
+This restart enables the workload identity webhook to inject the necessary
+environment for using GCP workload identity.
+
+
+
+```shell
+kubectl rollout restart deployment mxp-controller -n ${YOUR_CONTROL_PLANE_NAMESPACE}
+```
+
+## Use cases
+
+
+Configuring backup and restore with workload identity eliminates the need for
+static credentials in your cluster and the overhead of credential rotation.
+These benefits are helpful in:
+
+* Disaster recovery scenarios
+* Control plane migration
+* Compliance requirements
+* Rollbacks after unsuccessful upgrades
+
+## Next steps
+
+Now that you have a workload identity configured for the backup and restore
+component, visit the [Backup Configuration][backup-restore-guide] documentation.
+
+Other workload identity guides are:
+* [Billing][billing]
+* [Shared Secrets][secrets]
+
+[backup-restore-guide]: /spaces/howtos/backup-and-restore
+[billing]: /spaces/howtos/self-hosted/workload-id/billing-config
+[secrets]: /spaces/howtos/self-hosted/workload-id/eso-config
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/billing-config.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/billing-config.md
new file mode 100644
index 000000000..323a6122f
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/billing-config.md
@@ -0,0 +1,454 @@
+---
+title: Billing Workload ID
+weight: 1
+description: Configure workload identity for Spaces Billing
+---
+import GlobalLanguageSelector, { CodeBlock } from '@site/src/components/GlobalLanguageSelector';
+
+
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant your
+self-hosted Space cluster access to your cloud providers. Workload identity
+authentication grants temporary AWS credentials to your Kubernetes pod based on
+a service account. Assigning IAM roles and service accounts allows the pod to
+assume the IAM role dynamically and much more securely than static credentials.
+
+This guide walks you through creating an IAM trust role policy and applying it to your EKS
+cluster for billing in your Space cluster.
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant your
+self-hosted Space cluster access to your cloud providers. Workload identity
+authentication grants temporary Azure credentials to your Kubernetes pod based on
+a service account. Assigning managed identities and service accounts allows the pod to
+authenticate with Azure resources dynamically and much more securely than static credentials.
+
+This guide walks you through creating a managed identity and federated credential for your AKS
+cluster for billing in your Space cluster.
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant your
+self-hosted Space cluster access to your cloud providers. Workload identity
+authentication grants temporary GCP credentials to your Kubernetes pod based on
+a service account. Assigning IAM roles and service accounts allows the pod to
+access cloud resources dynamically and much more securely than static
+credentials.
+
+This guide walks you through configuring workload identity for your GKE
+cluster's billing component.
+
+
+
+## Prerequisites
+
+
+To set up a workload-identity, you'll need:
+
+
+- A self-hosted Space cluster
+- Administrator access in your cloud provider
+- Helm and `kubectl`
+
+## About the billing component
+
+The `vector.dev` component handles billing metrics collection in Spaces. It
+stores account data in your cloud storage. By default, this component runs in
+each control plane's host namespace.
+
+## Configuration
+
+
+
+Upbound supports workload-identity configurations in AWS with IAM Roles for
+Service Accounts and EKS pod identity association.
+
+#### IAM Roles for Service Accounts (IRSA)
+
+With IRSA, you can associate a Kubernetes service account in an EKS cluster with
+an AWS IAM role. Upbound authenticates workloads with that service account as
+the IAM role using temporary credentials instead of static role credentials.
+IRSA relies on the AWS STS `AssumeRoleWithWebIdentity` API to exchange OIDC ID tokens for
+the IAM role's temporary credentials. IRSA uses the `eks.amazonaws.com/role-arn`
+annotation to link the service account and the IAM role.
+
+**Create an IAM role and trust policy**
+
+First, create an IAM role with appropriate permissions to access your S3 bucket:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetObject",
+ "s3:PutObject",
+ "s3:ListBucket",
+ "s3:DeleteObject"
+ ],
+ "Resource": [
+ "arn:aws:s3:::${YOUR_BILLING_BUCKET}",
+ "arn:aws:s3:::${YOUR_BILLING_BUCKET}/*"
+ ]
+ }
+ ]
+}
+```
+
+You must configure the IAM role trust policy with the exact match for each
+provisioned control plane. An example of a trust policy for a single control
+plane is below:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "Federated": "arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:oidc-provider/${YOUR_OIDC_PROVIDER}"
+ },
+ "Action": "sts:AssumeRoleWithWebIdentity",
+ "Condition": {
+ "StringEquals": {
+        "${YOUR_OIDC_PROVIDER}:aud": "sts.amazonaws.com",
+        "${YOUR_OIDC_PROVIDER}:sub": "system:serviceaccount:${YOUR_NAMESPACE}:vector"
+ }
+ }
+ }
+ ]
+}
+```
+
+**Configure the EKS OIDC provider**
+
+Next, ensure your EKS cluster has an OIDC identity provider:
+
+```shell
+eksctl utils associate-iam-oidc-provider --cluster ${YOUR_CLUSTER_NAME} --approve
+```
+
+**Apply the IAM role**
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the Billing component:
+
+```shell
+--set "billing.enabled=true"
+--set "billing.storage.provider=aws"
+--set "billing.storage.aws.region=${YOUR_AWS_REGION}"
+--set "billing.storage.aws.bucket=${YOUR_BILLING_BUCKET}"
+--set "billing.storage.secretRef.name="
+--set controlPlanes.vector.serviceAccount.customAnnotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:role/${YOUR_BILLING_ROLE_NAME}"
+```
+
+:::important
+You **must** set the `billing.storage.secretRef.name` to an empty string to
+enable workload identity for the billing component.
+:::
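+
+As a sketch of how these values come together, the flags can be appended to a Spaces install or upgrade command; other required Spaces values are omitted for brevity and the chart reference follows the placeholder style used later in this guide:
+
+```shell
+helm upgrade spaces spaces-helm-chart \
+  --set "billing.enabled=true" \
+  --set "billing.storage.provider=aws" \
+  --set "billing.storage.aws.region=${YOUR_AWS_REGION}" \
+  --set "billing.storage.aws.bucket=${YOUR_BILLING_BUCKET}" \
+  --set "billing.storage.secretRef.name=" \
+  --set controlPlanes.vector.serviceAccount.customAnnotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:role/${YOUR_BILLING_ROLE_NAME}"
+```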
+
+#### EKS pod identities
+
+Upbound also supports EKS Pod Identity configuration. EKS Pod Identities allow
+you to create a pod identity association with your Kubernetes namespace, a
+service account, and an IAM role, which allows the EKS control plane to
+automatically handle the credential exchange.
+
+**Create an IAM role**
+
+First, create an IAM role with appropriate permissions to access your S3 bucket:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetObject",
+ "s3:PutObject",
+ "s3:ListBucket"
+ ],
+ "Resource": [
+ "arn:aws:s3:::${YOUR_BILLING_BUCKET}",
+ "arn:aws:s3:::${YOUR_BILLING_BUCKET}/*"
+ ]
+ }
+ ]
+}
+```
+
+**Configure your Space with Helm**
+
+When you install or upgrade your Space with Helm, add the billing values:
+
+```shell
+helm upgrade spaces spaces-helm-chart \
+ --set "billing.enabled=true" \
+ --set "billing.storage.provider=aws" \
+ --set "billing.storage.aws.region=${YOUR_AWS_REGION}" \
+ --set "billing.storage.aws.bucket=${YOUR_BILLING_BUCKET}" \
+ --set "billing.storage.secretRef.name="
+```
+
+**Create a Pod Identity Association**
+
+After Upbound provisions your control plane, create a Pod Identity Association
+with the `aws` CLI:
+
+```shell
+aws eks create-pod-identity-association \
+ --cluster-name ${YOUR_CLUSTER_NAME} \
+ --namespace ${YOUR_CONTROL_PLANE_NAMESPACE} \
+ --service-account vector \
+ --role-arn arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:role/${YOUR_BILLING_ROLE_NAME}
+```
+
+
+
+
+
+Upbound supports workload-identity configurations in Azure with Azure's built-in
+workload identity feature.
+
+First, enable the OIDC issuer and workload identity in your AKS cluster:
+
+```shell
+az aks update --resource-group ${YOUR_RESOURCE_GROUP} --name ${YOUR_AKS_CLUSTER_NAME} --enable-oidc-issuer --enable-workload-identity
+```
+
+Next, find and store the OIDC issuer URL as an environment variable:
+
+```shell
+export AKS_OIDC_ISSUER="$(az aks show --name ${YOUR_AKS_CLUSTER_NAME} --resource-group ${YOUR_RESOURCE_GROUP} --query "oidcIssuerProfile.issuerUrl" --output tsv)"
+```
+
+Create a new managed identity to associate with the billing component:
+
+```shell
+az identity create --name billing-identity --resource-group ${YOUR_RESOURCE_GROUP} --location ${YOUR_LOCATION}
+```
+
+Retrieve the client ID and store it as an environment variable:
+
+```shell
+export USER_ASSIGNED_CLIENT_ID="$(az identity show --name billing-identity --resource-group ${YOUR_RESOURCE_GROUP} --query clientId -otsv)"
+```
+
+Grant the managed identity you created access to your Azure Storage account:
+
+```shell
+az role assignment create --role "Storage Blob Data Contributor" --assignee $USER_ASSIGNED_CLIENT_ID --scope /subscriptions/${YOUR_SUBSCRIPTION_ID}/resourceGroups/${YOUR_RESOURCE_GROUP}/providers/Microsoft.Storage/storageAccounts/${YOUR_STORAGE_ACCOUNT}
+```
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the billing component:
+
+```shell
+--set "billing.enabled=true"
+--set "billing.storage.provider=azure"
+--set "billing.storage.azure.storageAccount=${SPACES_BILLING_STORAGE_ACCOUNT}"
+--set "billing.storage.azure.container=${SPACES_BILLING_STORAGE_CONTAINER}"
+--set "billing.storage.secretRef.name="
+--set controlPlanes.vector.serviceAccount.customAnnotations."azure\.workload\.identity/client-id"="${SPACES_BILLING_APP_ID}"
+--set controlPlanes.vector.pod.customLabels."azure\.workload\.identity/use"="true"
+```
+
+Create a federated credential to establish trust between the managed identity
+and your AKS OIDC provider:
+
+```shell
+az identity federated-credential create \
+ --name billing-federated-identity \
+ --identity-name billing-identity \
+ --resource-group ${YOUR_RESOURCE_GROUP} \
+ --issuer ${AKS_OIDC_ISSUER} \
+ --subject system:serviceaccount:${YOUR_CONTROL_PLANE_NAMESPACE}:vector
+```
+
+
+
+
+
+Upbound supports workload-identity configurations in GCP with IAM principal
+identifiers or service account impersonation.
+
+#### IAM principal identifiers
+
+IAM principal identifiers allow you to grant permissions directly to
+Kubernetes service accounts without additional annotation. Upbound recommends
+this approach for ease-of-use and flexibility.
+
+First, enable Workload Identity Federation on your GKE cluster:
+
+```shell
+gcloud container clusters update ${YOUR_CLUSTER_NAME} \
+ --workload-pool=${YOUR_PROJECT_ID}.svc.id.goog \
+ --region=${YOUR_REGION}
+```
+
+Next, configure your Spaces installation with the Spaces Helm chart parameters:
+
+```shell
+--set "billing.enabled=true"
+--set "billing.storage.provider=gcp"
+--set "billing.storage.gcp.bucket=${YOUR_BILLING_BUCKET}"
+--set "billing.storage.secretRef.name="
+```
+
+:::important
+You **must** set the `billing.storage.secretRef.name` to an empty string to
+enable workload identity for the billing component.
+:::
+
+Grant the necessary permissions to your Kubernetes service account:
+
+```shell
+gcloud projects add-iam-policy-binding ${YOUR_PROJECT_ID} \
+ --member="principalSet://iam.googleapis.com/projects/${YOUR_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${YOUR_PROJECT_ID}.svc.id.goog/attribute.kubernetes_namespace/${YOUR_CONTROL_PLANE_NAMESPACE}/attribute.kubernetes_service_account/vector" \
+ --role="roles/storage.objectAdmin"
+```
+
+Enable uniform bucket-level access on your storage bucket:
+
+```shell
+gcloud storage buckets update gs://${YOUR_BILLING_BUCKET} --uniform-bucket-level-access
+```
+
+#### Service account impersonation
+
+Service account impersonation allows you to link a Kubernetes service account to
+a GCP service account. The Kubernetes service account assumes the permissions of
+the GCP service account you specify.
+
+Enable Workload Identity Federation on your GKE cluster:
+
+```shell
+gcloud container clusters update ${YOUR_CLUSTER_NAME} \
+ --workload-pool=${YOUR_PROJECT_ID}.svc.id.goog \
+ --region=${YOUR_REGION}
+```
+
+Next, create a dedicated service account for your billing operations:
+
+```shell
+gcloud iam service-accounts create billing-sa \
+ --project=${YOUR_PROJECT_ID}
+```
+
+Grant storage permissions to the service account you created:
+
+```shell
+gcloud projects add-iam-policy-binding ${YOUR_PROJECT_ID} \
+ --member="serviceAccount:billing-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com" \
+ --role="roles/storage.objectAdmin"
+```
+
+Link the Kubernetes service account to the GCP service account:
+
+```shell
+gcloud iam service-accounts add-iam-policy-binding \
+ billing-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com \
+ --role="roles/iam.workloadIdentityUser" \
+ --member="serviceAccount:${YOUR_PROJECT_ID}.svc.id.goog[${YOUR_CONTROL_PLANE_NAMESPACE}/vector]"
+```
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the billing component:
+
+```shell
+--set "billing.enabled=true"
+--set "billing.storage.provider=gcp"
+--set "billing.storage.gcp.bucket=${YOUR_BILLING_BUCKET}"
+--set "billing.storage.secretRef.name="
+--set controlPlanes.vector.serviceAccount.customAnnotations."iam\.gke\.io/gcp-service-account"="billing-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com"
+```
+
+
+
+## Verify your configuration
+
+After you apply the configuration, use `kubectl` to verify the service account
+has the correct annotation:
+
+```shell
+kubectl get serviceaccount vector -n ${YOUR_CONTROL_PLANE_NAMESPACE} -o yaml
+```
+
+Verify the `vector` pod is running:
+
+```shell
+kubectl get pods -n ${YOUR_CONTROL_PLANE_NAMESPACE} | grep vector
+```
+
+## Restart workload
+
+
+
+You must manually restart a workload's pod when you add the
+`eks.amazonaws.com/role-arn` annotation to the running pod's service
+account.
+
+This restart enables the EKS pod identity webhook to inject the necessary
+environment for using IRSA.
+
+
+
+
+
+You must manually restart a workload's pod when you add the workload identity annotations to the running pod's service account.
+
+This restart enables the workload identity webhook to inject the necessary
+environment for using Azure workload identity.
+
+
+
+
+
+GCP workload identity doesn't require pod restarts after configuration changes.
+If you do need to restart the workload, use the `kubectl` command to force the
+component restart:
+
+
+
+```shell
+kubectl rollout restart deployment vector -n ${YOUR_CONTROL_PLANE_NAMESPACE}
+```
+
+
+## Use cases
+
+
+Using workload identity authentication for billing eliminates the need for static
+credentials in your cluster as well as the overhead of credential rotation.
+These benefits are helpful in:
+
+* Resource usage tracking across teams/projects
+* Cost allocation for multi-tenant environments
+* Financial auditing requirements
+* Capacity billing and resource optimization
+* Automated billing workflows
+
+## Next steps
+
+Now that you have workload identity configured for the billing component, visit
+the [Billing guide][billing-guide] for more information.
+
+Other workload identity guides are:
+* [Backup and restore][backuprestore]
+* [Shared Secrets][secrets]
+
+[billing-guide]: /spaces/howtos/self-hosted/billing
+[backuprestore]: /spaces/howtos/self-hosted/workload-id/backup-restore-config
+[secrets]: /spaces/howtos/self-hosted/workload-id/eso-config
diff --git a/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/eso-config.md b/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/eso-config.md
new file mode 100644
index 000000000..c1418c171
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/self-hosted/workload-id/eso-config.md
@@ -0,0 +1,503 @@
+---
+title: Shared Secrets Workload ID
+weight: 1
+description: Configure workload identity for Spaces Shared Secrets
+---
+import GlobalLanguageSelector, { CodeBlock } from '@site/src/components/GlobalLanguageSelector';
+
+
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant your
+self-hosted Space cluster access to your cloud providers. Workload identity
+authentication grants temporary AWS credentials to your Kubernetes pod based on
+a service account. Assigning IAM roles and service accounts allows the pod to
+assume the IAM role dynamically and much more securely than static credentials.
+
+This guide walks you through creating an IAM trust role policy and applying it to your EKS
+cluster for secret sharing with Kubernetes.
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant your
+self-hosted Space cluster access to your cloud providers. Workload identity
+authentication grants temporary Azure credentials to your Kubernetes pod based on
+a service account. Assigning managed identities and service accounts allows the pod to
+authenticate with Azure resources dynamically and much more securely than static credentials.
+
+This guide walks you through creating a managed identity and federated credential for your AKS
+cluster for shared secrets in your Space cluster.
+
+
+
+
+
+Workload-identity authentication lets you use access policies to grant your
+self-hosted Space cluster access to your cloud providers. Workload identity
+authentication grants temporary GCP credentials to your Kubernetes pod based on
+a service account. Assigning IAM roles and service accounts allows the pod to
+access cloud resources dynamically and much more securely than static
+credentials.
+
+This guide walks you through configuring workload identity for your GKE
+cluster's Shared Secrets component.
+
+
+
+## Prerequisites
+
+
+To set up a workload-identity, you'll need:
+
+
+- A self-hosted Space cluster
+- Administrator access in your cloud provider
+- Helm and `kubectl`
+
+
+## About the Shared Secrets component
+
+
+
+
+The External Secrets Operator (ESO) runs in each control plane's host namespace as `external-secrets-controller`. It needs to access
+your external secrets management service like AWS Secrets Manager.
+
+To configure the shared secrets component for workload identity, you must:
+
+* Annotate the Kubernetes service account to associate it with a cloud-side
+ principal (such as an IAM role, service account, or enterprise application). The workload must then
+ use this service account.
+* Label the workload (pod) to allow the injection of a temporary credential set,
+ enabling authentication.
+
+
+
+
+
+The External Secrets Operator (ESO) component runs in each control plane's host
+namespace as `external-secrets-controller`. It synchronizes secrets from
+external APIs into Kubernetes secrets. Shared secrets allow you to manage
+credentials outside your Kubernetes cluster while making them available to your
+application.
+
+
+
+
+
+The External Secrets Operator (ESO) component runs in each control plane's host
+namespace as `external-secrets-controller`. It synchronizes secrets from
+external APIs into Kubernetes secrets. Shared secrets allow you to manage
+credentials outside your Kubernetes cluster while making them available to your
+application.
+
+
+
+## Configuration
+
+
+
+Upbound supports workload-identity configurations in AWS with IAM Roles for
+Service Accounts or EKS pod identity association.
+
+#### IAM Roles for Service Accounts (IRSA)
+
+With IRSA, you can associate a Kubernetes service account in an EKS cluster with
+an AWS IAM role. Upbound authenticates workloads with that service account as
+the IAM role using temporary credentials instead of static role credentials.
+IRSA relies on the AWS STS `AssumeRoleWithWebIdentity` API to exchange OIDC ID tokens for
+the IAM role's temporary credentials. IRSA uses the `eks.amazonaws.com/role-arn`
+annotation to link the service account and the IAM role.
+
+**Create an IAM role and trust policy**
+
+First, create an IAM role with appropriate permissions to access AWS Secrets Manager:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "secretsmanager:GetSecretValue",
+ "secretsmanager:DescribeSecret",
+ "ssm:GetParameter"
+ ],
+ "Resource": [
+ "arn:aws:secretsmanager:${YOUR_REGION}:${YOUR_AWS_ACCOUNT_ID}:secret:${YOUR_SECRET_PREFIX}*",
+ "arn:aws:ssm:${YOUR_REGION}:${YOUR_AWS_ACCOUNT_ID}:parameter/${YOUR_PARAMETER_PREFIX}*"
+ ]
+ }
+ ]
+}
+```
+
+You must configure the IAM role trust policy with the exact match for each
+provisioned control plane. An example of a trust policy for a single control
+plane is below:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "Federated": "arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:oidc-provider/${YOUR_OIDC_PROVIDER}"
+ },
+ "Action": "sts:AssumeRoleWithWebIdentity",
+ "Condition": {
+ "StringEquals": {
+          "${YOUR_OIDC_PROVIDER}:aud": "sts.amazonaws.com"
+        },
+        "StringLike": {
+          "${YOUR_OIDC_PROVIDER}:sub": "system:serviceaccount:*:external-secrets-controller"
+ }
+ }
+ }
+ ]
+}
+```
+
+**Configure the EKS OIDC provider**
+
+Next, ensure your EKS cluster has an OIDC identity provider:
+
+```shell
+eksctl utils associate-iam-oidc-provider --cluster ${YOUR_CLUSTER_NAME} --approve
+```
+
+**Apply the IAM role**
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the shared secrets component:
+
+```shell
+--set controlPlanes.sharedSecrets.serviceAccount.customAnnotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:role/${YOUR_ESO_ROLE_NAME}"
+```
+
+This command allows the shared secrets component to authenticate with your
+dedicated IAM role in your EKS cluster environment.
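+
+As a sketch of how this fits into a Spaces upgrade that also enables the feature (other required values omitted; the chart reference follows the placeholder style used later in this guide):
+
+```shell
+helm upgrade spaces spaces-helm-chart \
+  --set "sharedSecrets.enabled=true" \
+  --set controlPlanes.sharedSecrets.serviceAccount.customAnnotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:role/${YOUR_ESO_ROLE_NAME}"
+```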
+
+#### EKS pod identities
+
+Upbound also supports EKS Pod Identity configuration. EKS Pod Identities allow
+you to create a pod identity association with your Kubernetes namespace, a
+service account, and an IAM role, which allows the EKS control plane to
+automatically handle the credential exchange.
+
+**Create an IAM role**
+
+First, create an IAM role with appropriate permissions to access AWS Secrets Manager:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "secretsmanager:GetSecretValue",
+ "secretsmanager:DescribeSecret",
+ "ssm:GetParameter"
+ ],
+ "Resource": [
+ "arn:aws:secretsmanager:${YOUR_AWS_REGION}:${YOUR_AWS_ACCOUNT_ID}:secret:${YOUR_SECRET_PREFIX}*",
+ "arn:aws:ssm:${YOUR_AWS_REGION}:${YOUR_AWS_ACCOUNT_ID}:parameter/${YOUR_PARAMETER_PREFIX}*"
+ ]
+ }
+ ]
+}
+```
+
+**Configure your Space with Helm**
+
+When you install or upgrade your Space with Helm, add the shared secrets value:
+
+```shell
+helm upgrade spaces spaces-helm-chart \
+ --set "sharedSecrets.enabled=true"
+```
+
+**Create a Pod Identity Association**
+
+After Upbound provisions your control plane, create a Pod Identity Association
+with the `aws` CLI:
+
+```shell
+aws eks create-pod-identity-association \
+ --cluster-name ${YOUR_CLUSTER_NAME} \
+ --namespace ${YOUR_CONTROL_PLANE_NAMESPACE} \
+ --service-account external-secrets-controller \
+ --role-arn arn:aws:iam::${YOUR_AWS_ACCOUNT_ID}:role/${YOUR_ROLE_NAME}
+```
+
+
+
+
+
+Upbound supports workload-identity configurations in Azure with Azure's built-in
+workload identity feature.
+
+First, enable the OIDC issuer and workload identity in your AKS cluster:
+
+```shell
+az aks update --resource-group ${YOUR_RESOURCE_GROUP} --name ${YOUR_AKS_CLUSTER_NAME} --enable-oidc-issuer --enable-workload-identity
+```
+
+Next, find and store the OIDC issuer URL as an environment variable:
+
+```shell
+export AKS_OIDC_ISSUER="$(az aks show --name ${YOUR_AKS_CLUSTER_NAME} --resource-group ${YOUR_RESOURCE_GROUP} --query "oidcIssuerProfile.issuerUrl" --output tsv)"
+```
+
+Create a new managed identity to associate with the shared secrets component:
+
+```shell
+az identity create --name secrets-identity --resource-group ${YOUR_RESOURCE_GROUP} --location ${YOUR_LOCATION}
+```
+
+Retrieve the client ID and store it as an environment variable:
+
+```shell
+export USER_ASSIGNED_CLIENT_ID="$(az identity show --name secrets-identity --resource-group ${YOUR_RESOURCE_GROUP} --query clientId -otsv)"
+```
+
+Grant the managed identity you created access to your Azure Key Vault secrets:
+
+```shell
+az keyvault set-policy --name ${YOUR_KEY_VAULT_NAME} \
+ --resource-group ${YOUR_RESOURCE_GROUP} \
+ --object-id $(az identity show --name secrets-identity --resource-group ${YOUR_RESOURCE_GROUP} --query principalId -otsv) \
+ --secret-permissions get list
+```
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the shared secrets component:
+
+```shell
+--set controlPlanes.sharedSecrets.serviceAccount.customAnnotations."azure\.workload\.identity/client-id"="${USER_ASSIGNED_CLIENT_ID}"
+--set controlPlanes.sharedSecrets.pod.customLabels."azure\.workload\.identity/use"="true"
+```
+
+Next, create a federated credential to establish trust between the managed identity
+and your AKS OIDC provider:
+
+```shell
+az identity federated-credential create \
+ --name secrets-federated-identity \
+ --identity-name secrets-identity \
+ --resource-group ${YOUR_RESOURCE_GROUP} \
+ --issuer ${AKS_OIDC_ISSUER} \
+ --subject system:serviceaccount:${YOUR_CONTROL_PLANE_NAMESPACE}:external-secrets-controller
+```
+
+
+
+
+
+Upbound supports workload-identity configurations in GCP with IAM principal
+identifiers or service account impersonation.
+
+#### IAM principal identifiers
+
+IAM principal identifiers allow you to grant permissions directly to
+Kubernetes service accounts without additional annotation. Upbound recommends
+this approach for ease-of-use and flexibility.
+
+First, enable Workload Identity Federation on your GKE cluster:
+
+```shell
+gcloud container clusters update ${YOUR_CLUSTER_NAME} \
+ --workload-pool=${YOUR_PROJECT_ID}.svc.id.goog \
+ --region=${YOUR_REGION}
+```
+
+Next, grant the necessary permissions to your Kubernetes service account:
+
+```shell
+gcloud projects add-iam-policy-binding ${YOUR_PROJECT_ID} \
+ --member="principalSet://iam.googleapis.com/projects/${YOUR_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${YOUR_PROJECT_ID}.svc.id.goog/attribute.kubernetes_namespace/${YOUR_CONTROL_PLANE_NAMESPACE}/attribute.kubernetes_service_account/external-secrets-controller" \
+ --role="roles/secretmanager.secretAccessor"
+```
+
+#### Service account impersonation
+
+Service account impersonation allows you to link a Kubernetes service account to
+a GCP service account. The Kubernetes service account assumes the permissions of
+the GCP service account you specify.
+
+Enable Workload Identity Federation on your GKE cluster:
+
+```shell
+gcloud container clusters update ${YOUR_CLUSTER_NAME} \
+ --workload-pool=${YOUR_PROJECT_ID}.svc.id.goog \
+ --region=${YOUR_REGION}
+```
+
+Next, create a dedicated service account for your secrets operations:
+
+```shell
+gcloud iam service-accounts create secrets-sa \
+ --project=${YOUR_PROJECT_ID}
+```
+
+Grant secret access permissions to the service account you created:
+
+```shell
+gcloud projects add-iam-policy-binding ${YOUR_PROJECT_ID} \
+ --member="serviceAccount:secrets-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com" \
+ --role="roles/secretmanager.secretAccessor"
+```
+
+Link the Kubernetes service account to the GCP service account:
+
+```shell
+gcloud iam service-accounts add-iam-policy-binding \
+ secrets-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com \
+ --role="roles/iam.workloadIdentityUser" \
+ --member="serviceAccount:${YOUR_PROJECT_ID}.svc.id.goog[${YOUR_CONTROL_PLANE_NAMESPACE}/external-secrets-controller]"
+```
+
+In your control plane, pass the `--set` flag with the Spaces Helm chart
+parameters for the shared secrets component:
+
+```shell
+--set controlPlanes.sharedSecrets.serviceAccount.customAnnotations."iam\.gke\.io/gcp-service-account"="secrets-sa@${YOUR_PROJECT_ID}.iam.gserviceaccount.com"
+```
+
+
+
+## Verify your configuration
+
+After you apply the configuration, use `kubectl` to verify the service account
+has the correct annotation:
+
+```shell
+kubectl get serviceaccount external-secrets-controller -n ${YOUR_CONTROL_PLANE_NAMESPACE} -o yaml
+```
+
+
+
+Verify the `external-secrets` pod is running correctly:
+
+```shell
+kubectl get pods -n ${YOUR_CONTROL_PLANE_NAMESPACE} | grep external-secrets
+```
+
+
+
+
+
+Verify the External Secrets Operator pod is running correctly:
+
+```shell
+kubectl get pods -n ${YOUR_CONTROL_PLANE_NAMESPACE} | grep external-secrets
+```
+
+
+
+
+
+Verify the `external-secrets` pod is running correctly:
+
+```shell
+kubectl get pods -n ${YOUR_CONTROL_PLANE_NAMESPACE} | grep external-secrets
+```
+
+
+
+## Restart workload
+
+
+
+You must manually restart a workload's pod when you add the
+`eks.amazonaws.com/role-arn` annotation to the running pod's service
+account.
+
+This restart enables the EKS pod identity webhook to inject the necessary
+environment for using IRSA.
+
+
+
+
+
+You must manually restart a workload's pod when you add the workload identity annotations to the running pod's service account.
+
+This restart enables the workload identity webhook to inject the necessary
+environment for using Azure workload identity.
+
+
+
+
+
+GCP workload identity doesn't require pod restarts after configuration changes.
+If you do need to restart the workload, use the `kubectl` command to force the
+component restart:
+
+
+
+```shell
+kubectl rollout restart deployment external-secrets -n ${YOUR_CONTROL_PLANE_NAMESPACE}
+```
+
+## Use cases
+
+
+
+
+Shared secrets with workload identity eliminates the need for static credentials
+in your cluster. These benefits are particularly helpful in:
+
+* Secure application credentials management
+* Database connection string storage
+* API token management
+* Compliance with secret rotation security standards
+* Multi-environment configuration with centralized secret management
+
+
+
+
+
+Using workload identity authentication for shared secrets eliminates the need for static
+credentials in your cluster as well as the overhead of credential rotation.
+These benefits are particularly helpful in:
+
+* Secure application credentials management
+* Database connection string storage
+* API token management
+* Compliance with secret rotation security standards
+
+
+
+
+
+Configuring the external secrets operator with workload identity eliminates the need for
+static credentials in your cluster and the overhead of credential rotation.
+These benefits are particularly helpful in:
+
+* Secure application credentials management
+* Database connection string storage
+* API token management
+* Compliance with secret rotation security standards
+
+
+
+## Next steps
+
+Now that you have workload identity configured for the shared secrets component, visit
+the [Shared Secrets][eso-guide] guide for more information.
+
+Other workload identity guides are:
+* [Backup and restore][backuprestore]
+* [Billing][billing]
+
+[eso-guide]: /spaces/howtos/secrets-management
+[backuprestore]: /spaces/howtos/self-hosted/workload-id/backup-restore-config
+[billing]: /spaces/howtos/self-hosted/workload-id/billing-config
diff --git a/spaces_versioned_docs/version-1.13/howtos/simulations.md b/spaces_versioned_docs/version-1.13/howtos/simulations.md
new file mode 100644
index 000000000..537906b8d
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/howtos/simulations.md
@@ -0,0 +1,105 @@
+---
+title: Simulate changes to your Control Plane Projects
+sidebar_position: 100
+description: Use the Up CLI to mock operations before deploying to your environments.
+---
+
+
+:::important
+The Simulations feature is in private preview. For more information, [reach out to Upbound][reach-out-to-upbound].
+:::
+
+Control plane simulations allow you to preview changes to your resources before
+applying them to your control planes. Like a plan or dry-run operation,
+simulations expose the impact of updates to compositions or claims without
+changing your actual resources.
+
+A control plane simulation creates a temporary copy of your control plane and
+returns a preview of the desired changes. The simulation change plan helps you
+reduce the risk of unexpected behavior based on your changes.
+
+## Simulation benefits
+
+Control planes are dynamic systems that automatically reconcile resources to
+match your desired state. Simulations provide visibility into this
+reconciliation process by showing:
+
+
+* New resources to create
+* Existing resources to change
+* Existing resources to delete
+* How configuration changes propagate through the system
+
+These insights are crucial when planning complex changes or upgrading Crossplane
+packages.
+
+## Requirements
+
+Simulations are available to select customers on Upbound Cloud with Team
+Tier or higher. For more information, [reach out to Upbound][reach-out-to-upbound-1].
+
+## How to simulate your control planes
+
+Before you start a simulation, build your project and use the
+`up project run` command to run your control plane.
+
+Use the `up project simulate` command with your control plane name to start the
+simulation:
+
+```shell {copy-lines="all"}
+up project simulate --complete-after=60s --terminate-on-finish
+```
+
+The `complete-after` flag determines how long to run the simulation before it completes and calculates the results. Depending on the change, a simulation may not complete within your defined interval, leaving unaffected resources marked as `unchanged`.
+
+The `terminate-on-finish` flag terminates the simulation after the time
+you set, deleting the control plane that ran the simulation.
+
+At the end of your simulation, your CLI returns:
+* A summary of the resources created, modified, or deleted
+* Diffs for each resource affected
+
+## View your simulation in the Upbound Console
+You can also view your simulation results in the Upbound Console:
+
+1. Navigate to your base control plane in the Upbound Console
+2. Select the "Simulations" tab in the menu
+3. Select a simulation object to see a list of changes for all
+   affected resources.
+
+The Console provides visual indications of changes:
+
+- Created Resources: Marked with green
+- Modified Resources: Marked with yellow
+- Deleted Resources: Marked with red
+- Unchanged Resources: Displayed in gray
+
+
+
+## Considerations
+
+Simulations is a **private preview** feature.
+
+Be aware of the following limitations:
+
+- Simulations can't predict the exact behavior of external systems due to the
+ complexity and non-deterministic reconciliation pattern in Crossplane.
+
+- The only completion criterion for a simulation is time. Your simulation may not
+ receive a conclusive result within that interval. Upbound recommends the
+ default `60s` value.
+
+- Providers don't run in simulations. Simulations can't compose resources that
+ rely on the status of Managed Resources.
+
+
+The Upbound team is working to improve these limitations. Your feedback is always appreciated.
+
+## Next steps
+
+For more information, follow the [tutorial][tutorial] on Simulations.
+
+
+[tutorial]: /manuals/cli/howtos/simulations
+[reach-out-to-upbound]: https://www.upbound.io/contact-us
+[reach-out-to-upbound-1]: https://www.upbound.io/contact-us
diff --git a/spaces_versioned_docs/version-1.13/overview/_category_.json b/spaces_versioned_docs/version-1.13/overview/_category_.json
new file mode 100644
index 000000000..54bb16430
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/overview/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Overview",
+ "position": 0
+}
diff --git a/spaces_versioned_docs/version-1.13/overview/index.md b/spaces_versioned_docs/version-1.13/overview/index.md
new file mode 100644
index 000000000..143f02bec
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/overview/index.md
@@ -0,0 +1,17 @@
+---
+title: Spaces Overview
+sidebar_position: 0
+---
+
+# Upbound Spaces
+
+Welcome to the Upbound Spaces documentation. This section contains comprehensive
+documentation for the Spaces API and Spaces operations across all supported
+versions.
+
+
+## Get Started
+
+- **[Concepts](/spaces/concepts/control-planes)** - Core concepts for Spaces
+- **[How-To Guides](/spaces/howtos/auto-upgrade)** - Step-by-step guides for operating Spaces
+- **[API Reference](/spaces/reference/)** - API specifications and resources
diff --git a/spaces_versioned_docs/version-1.13/reference/_category_.json b/spaces_versioned_docs/version-1.13/reference/_category_.json
new file mode 100644
index 000000000..4a6a139c4
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/reference/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Spaces API",
+ "position": 1,
+ "collapsed": true
+}
diff --git a/spaces_versioned_docs/version-1.13/reference/index.md b/spaces_versioned_docs/version-1.13/reference/index.md
new file mode 100644
index 000000000..5e68b0768
--- /dev/null
+++ b/spaces_versioned_docs/version-1.13/reference/index.md
@@ -0,0 +1,72 @@
+---
+title: Spaces API Reference
+description: Documentation for the Spaces API resources (v1.15 - Latest)
+sidebar_position: 1
+---
+import CrdDocViewer from '@site/src/components/CrdViewer';
+
+
+This page documents the Custom Resource Definitions (CRDs) for the Spaces API.
+
+
+## Control Planes
+### Control Planes
+
+
+## Observability
+### Shared Telemetry Configs
+
+
+## `pkg`
+### Controller Revisions
+
+
+### Controller Runtime Configs
+
+
+### Controllers
+
+
+### Remote Configuration Revisions
+
+
+### Remote Configurations
+
+
+## Policy
+### Shared Upbound Policies
+
+
+## References
+### Referenced Objects
+
+
+## Scheduling
+### Environments
+
+
+## Secrets
+### Shared External Secrets
+
+
+### Shared Secret Stores
+
+
+## Simulations
+
+
+## Spaces Backups
+### Backups
+
+
+### Backup Schedules
+
+
+### Shared Backup Configs
+
+
+### Shared Backups
+
+
+### Shared Backup Schedules
+
diff --git a/spaces_versioned_docs/version-1.14/concepts/_category_.json b/spaces_versioned_docs/version-1.14/concepts/_category_.json
new file mode 100644
index 000000000..4b8667e29
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/concepts/_category_.json
@@ -0,0 +1,7 @@
+{
+ "label": "Concepts",
+ "position": 2,
+ "collapsed": true
+}
+
+
diff --git a/spaces_versioned_docs/version-1.14/concepts/control-planes.md b/spaces_versioned_docs/version-1.14/concepts/control-planes.md
new file mode 100644
index 000000000..76c6386c8
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/concepts/control-planes.md
@@ -0,0 +1,222 @@
+---
+title: Control Planes
+weight: 1
+description: An overview of control planes in Upbound
+---
+
+
+Control planes in Upbound are fully isolated Crossplane control plane instances that Upbound manages for you. This means Upbound handles:
+
+- the underlying lifecycle of infrastructure (compute, memory, and storage) required to power your instance.
+- scaling of the infrastructure.
+- the maintenance of the core Crossplane components that make up a control plane.
+
+This lets users focus on building their APIs and operating their control planes, while Upbound handles the rest. Each control plane has its own dedicated API server connecting users to their control plane.
+
+
+## Control plane architecture
+
+
+
+Along with underlying infrastructure, Upbound manages the Crossplane system components. You don't need to manage the Crossplane API server or core resource controllers because Upbound manages your control plane lifecycle from creation to deletion.
+
+### Crossplane API
+
+Each control plane offers a unified endpoint. You interact with your control plane through Kubernetes and Crossplane API calls. Each control plane runs a Kubernetes API server to handle API requests. You can make API calls in the following ways:
+
+- Direct calls: HTTP/gRPC
+- Indirect calls: the up CLI, Kubernetes clients such as kubectl, or the Upbound Console.
+
+Like in Kubernetes, the API server is the hub for all communication for the control plane. All internal components such as system processes and provider controllers act as clients of the API server.
+
+Your API requests tell Crossplane your desired state for the resources your control plane manages. Crossplane attempts to constantly maintain that state. Crossplane lets you configure objects in the API either imperatively or declaratively.
+
+### Crossplane versions and features
+
+Upbound automatically upgrades Crossplane system components on control planes to new Crossplane versions for updated features and improvements in the open source project. With [automatic upgrades][automatic-upgrades], you choose the cadence that Upbound automatically upgrades the system components in your control plane. You can also choose to manually upgrade your control plane to a different Crossplane version.
+
+For detailed information on versions and upgrades, refer to the [release notes][release-notes] and the automatic upgrade documentation. If you don't enroll a control plane in a release channel, Upbound doesn't apply automatic upgrades.
+
+Features considered "alpha" in Crossplane are by default not supported in a control plane unless otherwise specified.
+
+### Hosting environments
+
+Every control plane in Upbound belongs to a [control plane group][control-plane-group]. Control plane groups are a logical grouping of one or more control planes with shared objects (such as secrets or backup configuration). Every group resides in a [Space][space] in Upbound; Spaces are hosting environments for control planes.
+
+Think of a Space as being conceptually the same as an AWS, Azure, or GCP region. Regardless of the Space type you run a control plane in, the core experience is identical.
+
+## Management
+
+### Create a control plane
+
+You can create a new control plane from the Upbound Console, [up CLI][up-cli], or with Kubernetes clients such as `kubectl`.
+
+
+
+
+
+To use the CLI, run the following:
+
+```shell
+up ctp create
+```
+
+To learn more about control plane-related commands in `up`, go to the [CLI reference][cli-reference] documentation.
+
+
+
+You can create and manage control planes declaratively in Upbound. Before you
+begin, ensure you're logged into Upbound and set the correct context:
+
+```bash
+up login
+# Example: acmeco/upbound-gcp-us-west-1/default
+up ctx ${yourOrganization}/${yourSpace}/${yourGroup}
+```
+
+```yaml
+#controlplane-a.yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ name: controlplane-a
+spec:
+ crossplane:
+ autoUpgrade:
+ channel: Rapid
+```
+
+```bash
+kubectl apply -f controlplane-a.yaml
+```
+
+
+
+
+
+### Connect directly to your control plane
+
+
+You can connect to a control plane's API server directly via the up CLI. Use the [`up ctx`][up-ctx] command to set your kubeconfig's current context to a control plane:
+
+```shell
+# Example: acmeco/upbound-gcp-us-west-1/default/ctp1
+up ctx ${yourOrganization}/${yourSpace}/${yourGroup}/${yourControlPlane}
+```
+
+To disconnect from your control plane and revert your kubeconfig's current context to the previous entry, run the following:
+
+```shell
+up ctx ..
+```
+
+You can also generate a `kubeconfig` file for a control plane with [`up ctx -f`][up-ctx-f].
+
+```shell
+up ctx ${yourOrganization}/${yourSpace}/${yourGroup}/${yourControlPlane} -f - > ctp-kubeconfig.yaml
+```
+
+:::tip
+To learn more about how to use `up ctx` to navigate different contexts in Upbound, read the [CLI documentation][cli-documentation].
+:::
+
+## Configuration
+
+When you create a new control plane, Upbound provides you with a fully isolated instance of Crossplane. Configure your control plane by installing packages that extend its capabilities, such as the ability to create and manage the lifecycle of new types of infrastructure resources.
+
+You're encouraged to install any Crossplane package type (Providers, Configurations, Functions) available in the [Upbound Marketplace][upbound-marketplace] on your control planes.
+
+### Install packages
+
+Below are a couple of ways to install Crossplane packages on your control plane.
+
+
+
+
+
+
+Use the `up` CLI to install Crossplane packages from the [Upbound Marketplace][upbound-marketplace-1] on your control planes. Connect directly to your control plane via `up ctx`. Then, to install a provider:
+
+```shell
+up ctp provider install xpkg.upbound.io/upbound/provider-family-aws
+```
+
+To install a Configuration:
+
+```shell
+up ctp configuration install xpkg.upbound.io/upbound/platform-ref-aws
+```
+
+To install a Function:
+
+```shell
+up ctp function install xpkg.upbound.io/crossplane-contrib/function-kcl
+```
+
+
+You can use kubectl to directly apply any Crossplane manifest. Below is an example for installing a Crossplane provider:
+
+```yaml
+cat <<EOF | kubectl apply -f -
+apiVersion: pkg.crossplane.io/v1
+kind: Provider
+metadata:
+  name: provider-family-aws
+spec:
+  package: xpkg.upbound.io/upbound/provider-family-aws
+EOF
+```
+
+
+For production-grade scenarios, it's recommended you configure your control plane declaratively via Git plus a Continuous Delivery (CD) engine such as Argo. For guidance on this topic, read [GitOps with control planes][gitops-with-control-planes].
+
+
+
+
+
+
+### Configure Crossplane ProviderConfigs
+
+#### ProviderConfigs with OpenID Connect
+
+Use OpenID Connect (`OIDC`) to authenticate to Upbound control planes without credentials. OIDC lets your control plane exchange short-lived tokens directly with your cloud provider. Read how to [connect control planes to external services][connect-control-planes-to-external-services] to learn more.
+
+#### Generic ProviderConfigs
+
+The Upbound Console doesn't allow direct editing of ProviderConfigs that don't support `Upbound` authentication. To edit these ProviderConfigs on your control plane, connect to the control plane directly by following the instructions in the previous section and using `kubectl`.
+
+### Configure secrets
+
+Upbound gives users the ability to configure the synchronization of secrets from external stores into control planes. Configure this capability at the group-level, explained in the [Spaces documentation][spaces-documentation].
+
+### Configure backups
+
+Upbound gives users the ability to configure backup schedules, take impromptu backups, and conduct self-service restore operations. Configure this capability at the group-level, explained in the [Spaces documentation][spaces-documentation-1].
+
+### Configure telemetry
+
+
+Upbound gives users the ability to configure the collection of telemetry (logs, metrics, and traces) in their control planes. Using Upbound's built-in [OTEL][otel] support, you can stream this data out to your preferred observability solution. Configure this capability at the group-level, explained in the [Spaces documentation][spaces-documentation-2].
+
+
+
+[automatic-upgrades]: /spaces/howtos/auto-upgrade
+[release-notes]: https://github.com/upbound/universal-crossplane/releases
+[control-plane-group]: /spaces/concepts/groups
+[space]: /spaces/overview
+[up-cli]: /reference/cli-reference
+[cli-reference]: /reference/cli-reference
+[up-ctx]: /reference/cli-reference
+[up-ctx-f]: /reference/cli-reference
+[cli-documentation]: /manuals/cli/concepts/contexts
+[upbound-marketplace]: https://marketplace.upbound.io
+[upbound-marketplace-1]: https://marketplace.upbound.io
+[gitops-with-control-planes]: /spaces/howtos/cloud-spaces/gitops
+[connect-control-planes-to-external-services]: /manuals/platform/howtos/oidc
+[spaces-documentation]: /spaces/howtos/secrets-management
+[spaces-documentation-1]: /spaces/howtos/backup-and-restore
+[otel]: https://opentelemetry.io
+[spaces-documentation-2]: /spaces/howtos/observability
diff --git a/spaces_versioned_docs/version-1.14/concepts/deployment-modes.md b/spaces_versioned_docs/version-1.14/concepts/deployment-modes.md
new file mode 100644
index 000000000..f5e718f88
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/concepts/deployment-modes.md
@@ -0,0 +1,53 @@
+---
+title: Deployment Modes
+sidebar_position: 10
+description: An overview of deployment modes for Spaces
+---
+
+Upbound Spaces can be deployed and used in a variety of modes:
+
+- **Cloud Spaces:** Multi-tenant Upbound-hosted, Upbound-managed Space environment. Cloud Spaces provide a typical SaaS experience.
+- **[Dedicated Spaces][dedicated-spaces]:** Single-tenant Upbound-hosted, Upbound-managed Space environment. Dedicated Spaces provide a SaaS experience, with additional isolation guarantees that your workloads run in a fully isolated context.
+- **[Managed Spaces][managed-spaces]:** Single-tenant customer-hosted, Upbound-managed Space environment. Managed Spaces provide a SaaS-like experience, with additional guarantees of all hosting infrastructure being served from your own cloud account.
+- **[Self-Hosted Spaces][self-hosted-spaces]:** Single-tenant customer-hosted, customer-managed Space environment. This is a fully self-hosted, self-managed software experience for using Spaces. Upbound delivers the Spaces software and you run it yourself.
+
+The Upbound platform uses a federated model to connect each Space back to a
+central service called the [Upbound Console][console], which is deployed and
+managed by Upbound.
+
+By default, customers have access to a set of Cloud Spaces.
+
+## Supported clouds
+
+You can host Upbound Spaces on Amazon Web Services (AWS), Microsoft Azure,
+and Google Cloud Platform (GCP). Regardless of the hosting platform, you can use
+Spaces to deploy control planes that manage the lifecycle of your resources.
+
+## Supported regions
+
+These tables list the cloud service provider regions supported by Upbound.
+
+### GCP
+
+| Region | Location |
+| --- | --- |
+| `us-west-1` | Western US (Oregon)
+| `us-central-1` | Central US (Iowa)
+| `eu-west-3` | Eastern Europe (Frankfurt)
+
+### AWS
+
+| Region | Location |
+| --- | --- |
+| `us-east-1` | Eastern US (Northern Virginia)
+
+### Azure
+
+| Region | Location |
+| --- | --- |
+| `us-east-1` | Eastern US (Iowa)
+
+[dedicated-spaces]: /spaces/howtos/cloud-spaces/dedicated-spaces-deployment
+[managed-spaces]: /spaces/howtos/self-hosted/managed-spaces-deployment
+[self-hosted-spaces]: /spaces/howtos/self-hosted/self-hosted-spaces-deployment
+[console]: /manuals/console/upbound-console/
diff --git a/spaces_versioned_docs/version-1.14/concepts/groups.md b/spaces_versioned_docs/version-1.14/concepts/groups.md
new file mode 100644
index 000000000..d2ccacdb3
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/concepts/groups.md
@@ -0,0 +1,115 @@
+---
+title: Control Plane Groups
+sidebar_position: 2
+description: An introduction to the Control Plane Groups in Upbound
+plan: "enterprise"
+---
+
+
+
+In Upbound, Control Plane Groups (or just, 'groups') are a logical grouping of one or more control planes with shared resources like [secrets][secrets] or [backups][backups]. It's a mechanism for isolating these groups of resources within a single [Space][space]. All role-based access control in Upbound happens at the control plane group-level.
+
+## When to use multiple groups
+
+You should use groups in environments where there's a need to have Crossplane manage infrastructure across multiple cloud accounts or projects. If you only need to deploy and manage resources in a couple of cloud accounts, you shouldn't need to think about groups at all.
+
+Groups are a way to divide access in Upbound between multiple teams. Think of a group as being analogous to a Kubernetes _namespace_.
+
+## The 'default' group
+
+Every Cloud Space in Upbound has a group named _default_ available.
+
+## Working with groups
+
+### View groups
+
+You can list groups in a Space using:
+
+```shell
+up group list
+```
+
+If you're operating in a single-tenant Space and have access to the underlying cluster, you can list namespaces that have the group label:
+
+```shell
+kubectl get namespaces -l spaces.upbound.io/group=true
+```
+
+### Set the group for a request
+
+Several commands in _up_ have a group context. To set the group for a request, use the `--group` flag:
+
+```shell
+up ctp list --group=team1
+```
+```shell
+up ctp create new-ctp --group=team2
+```
+
+### Set the group preference
+
+The _up_ CLI operates upon a single [Upbound context][upbound-context]. Whatever context gets set is then used as the preference for other commands. An Upbound context can point at several levels:
+
+1. A Space in Upbound
+2. A group within a Space
+3. A control plane within a group
+
+To set the group preference, use `up ctx` to choose a group as your preferred Upbound context. For example:
+
+```shell
+# This sets the context for the up CLI to the default group in an Upbound-managed Cloud Space (gcp-us-west-1) for an organization called 'acmeco'
+up ctx acmeco/upbound-gcp-us-west-1/default/
+```
+
+### Create a group
+
+To create a group, login to Upbound and set your context to your desired Space:
+
+```shell
+up login
+up ctx <organization>/<space>
+# Example: up ctx acmeco/upbound-gcp-us-west-1
+```
+
+
+Create a group:
+
+```shell
+up group create my-new-group
+```
+
+### Delete a group
+
+To delete a group, login to Upbound and set your context to your desired Space:
+
+```shell
+up login
+up ctx <organization>/<space>
+# Example: up ctx acmeco/upbound-gcp-us-west-1
+```
+
+Delete a group:
+
+```shell
+up group delete my-new-group
+```
+
+### Protected groups
+
+Once a control plane gets created in a group, Upbound enforces a protection policy on the group. Upbound prevents accidental deletion of the group. To delete a group that has control planes in it, you must first delete all control planes in the group.
+
+## Groups in the context of single-tenant Spaces
+
+Upbound offers a variety of deployment models to use the product. If you deploy your own single-tenant Upbound Space (whether connected or disconnected), you're self-hosting Upbound software in a Kubernetes cluster. In these environments, a control plane group maps to a corresponding namespace in the cluster which hosts the Space.
+
+Most Kubernetes clusters come with some set of predefined namespaces. Because a group maps to a corresponding Kubernetes namespace, creating a group also requires a matching namespace in the cluster. When the Spaces software is newly installed, no groups exist. You _can_ elevate a Kubernetes namespace to become a group by doing any of the following:
+
+1. Creating a group with the same name as a preexisting Kubernetes namespace
+2. Creating a control plane in a preexisting Kubernetes namespace
+3. Labeling a Kubernetes namespace with the label `spaces.upbound.io/group=true`
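+
+For example, the third option is a single command (the namespace name is illustrative):
+
+```shell
+kubectl label namespace team-a spaces.upbound.io/group=true
+```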
+
+
+[secrets]: /spaces/howtos/secrets-management
+[backups]: /spaces/howtos/self-hosted/workload-id/backup-restore-config/
+[space]: /spaces/overview
+[upbound-context]: /manuals/cli/concepts/contexts
diff --git a/spaces_versioned_docs/version-1.14/howtos/_category_.json b/spaces_versioned_docs/version-1.14/howtos/_category_.json
new file mode 100644
index 000000000..d3a8547aa
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/_category_.json
@@ -0,0 +1,7 @@
+{
+ "label": "How-tos",
+ "position": 3,
+ "collapsed": true
+}
+
+
diff --git a/spaces_versioned_docs/version-1.14/howtos/api-connector.md b/spaces_versioned_docs/version-1.14/howtos/api-connector.md
new file mode 100644
index 000000000..4db30bac1
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/api-connector.md
@@ -0,0 +1,408 @@
+---
+title: API Connector
+weight: 90
+description: Connect Kubernetes clusters to remote Crossplane control planes for resource synchronization
+aliases:
+ - /api-connector
+ - /concepts/api-connector
+---
+
+:::warning
+API Connector is currently in **Preview**. The feature is under active
+development and subject to breaking changes. Use for testing and evaluation
+purposes only.
+:::
+
+API Connector enables seamless integration between Kubernetes application
+clusters consuming APIs and remote Crossplane control planes providing and
+reconciling APIs.
+
+You can use the API Connector to decouple where Crossplane is running (for
+example in an Upbound control plane), and where APIs are consumed
+(for example in an existing Kubernetes cluster). This gives you flexibility and
+consistency in your control plane operations.
+
+
+
+Unlike the [Control Plane Connector](ctp-connector.md), which offers only
+coarse-grained connectivity between app clusters and a control plane, API
+Connector offers fine-grained configuration of which APIs get offered, along with
+multi-cluster connectivity.
+
+## Architecture overview
+
+
+
+API Connector uses a **provider-consumer** model:
+
+- **Provider control plane**: The Upbound control plane that provides APIs and manages infrastructure.
+- **Consumer cluster**: Any Kubernetes cluster whose users want to use APIs provided by the provider control plane, without having to run Crossplane. The API Connector gets installed in the consumer cluster and bidirectionally syncs API objects to the provider.
+
+### Key components
+
+**Custom Resource Definitions (CRDs)**:
+
+
+- `ClusterConnection`: Establishes a connection from the consumer to the provider cluster. Pulls bindable CRD APIs from the provider into the consumer cluster for use.
+
+- `ClusterAPIBinding`: Instructs API connector to sync all API objects cluster-wide with a given API group to a given provider cluster.
+- `APIBinding`: Namespaced version of `ClusterAPIBinding`. Instructs API connector to sync API objects within a given namespace and with a given API group to a given provider cluster.
+
+
+## Prerequisites
+
+Before using API Connector, ensure:
+
+1. **Consumer cluster** has network access to the provider control plane
+1. You have a license to use API Connector. If you are unsure, [contact Upbound][contact] or your sales representative.
+
+This guide walks through how to automate connecting your cluster to an Upbound
+control plane. You can also manually configure the API Connector.
+
+## Publishing APIs in the provider cluster
+
+
+
+
+First, log in to your provider control plane and choose which CRD APIs you want
+to make accessible to the consumer cluster. API Connector only syncs
+these "bindable" CRDs.
+
+
+
+
+
+
+Use the `up` CLI to login:
+
+```bash
+up login
+```
+
+Connect to your control plane:
+
+```bash
+up ctx
+```
+
+Check what CRDs are available:
+
+```bash
+kubectl get crds
+```
+
+
+Label all CRDs you want to publish with the bindable label:
+
+
+```bash
+kubectl label crd <crd-name> 'connect.upbound.io/bindable'='true' --overwrite
+```
+
+
+
+
+Change context to the provider cluster:
+```bash
+kubectl config use-context <provider-context>
+```
+
+Check what CRDs are available:
+```bash
+kubectl get crds
+```
+
+
+Label all CRDs you want to publish with the bindable label:
+
+```bash
+kubectl label crd <crd-name> 'connect.upbound.io/bindable'='true' --overwrite
+```
+
+
+
+## Installation
+
+
+
+
+The up CLI provides the simplest installation method with automatic
+configuration:
+
+Make sure the current kubeconfig context is set to the **provider control plane**:
+```bash
+up ctx
+
+up controlplane api-connector install --consumer-kubeconfig <path-to-kubeconfig> [OPTIONS]
+```
+
+The command:
+1. Creates a Robot account in your Upbound organization.
+1. Gives the created Robot account `admin` permissions on the provider control plane.
+1. Generates a JWT token for the Robot account and stores it in a Kubernetes Secret in the consumer cluster.
+1. Installs the API Connector Helm chart in the consumer cluster.
+1. Creates a `ClusterConnection` object in the consumer cluster that references the newly generated Secret, so the API Connector can authenticate to the provider control plane.
+1. Pulls all published CRDs from the previous step into the consumer cluster.
+
+**Example**:
+```bash
+up controlplane api-connector install \
+ --consumer-kubeconfig ~/.kube/config \
+ --consumer-context my-cluster \
+ --upbound-token <token>
+```
+
+This command uses the provided token to authenticate with the **Provider control plane**
+and create a `ClusterConnection` resource in the **Consumer cluster** to connect to the
+**Provider control plane**.
+
+**Key Options**:
+- `--consumer-kubeconfig`: Path to consumer cluster kubeconfig (required)
+- `--consumer-context`: Context name for consumer cluster (required)
+- `--name`: Custom name for connection resources (optional)
+- `--upbound-token`: API token for authentication (optional)
+- `--upgrade`: Upgrade existing installation (optional)
+- `--version`: Specific version to install (optional)
+
+
+
+
+For manual installation or custom configurations:
+
+```bash
+helm upgrade --install api-connector oci://xpkg.upbound.io/spaces-artifacts/api-connector \
+ --namespace upbound-system \
+ --create-namespace \
+ --version <version> \
+ --set consumerClusterDisplayName=<display-name>
+```
+
+### Authentication methods
+
+API Connector supports two authentication methods:
+
+
+
+
+For Upbound Spaces integration:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: spaces-secret
+ namespace: upbound-system
+type: Opaque
+stringData:
+ token: <robot-token>
+ organization: <organization-name>
+ spacesBaseURL: <spaces-base-url>
+ controlPlaneGroupName: <control-plane-group>
+ controlPlaneName: <control-plane-name>
+```
+
+
+
+For direct cluster access:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: provider-kubeconfig
+ namespace: upbound-system
+type: Opaque
+data:
+ kubeconfig: <base64-encoded-kubeconfig>
+```
+
+
+
+
+### Connection setup
+
+Create a `ClusterConnection` to establish connectivity:
+
+
+
+
+```yaml
+apiVersion: connect.upbound.io/v1alpha1
+kind: ClusterConnection
+metadata:
+ name: spaces-connection
+ namespace: upbound-system
+spec:
+ secretRef:
+ kind: UpboundRobotToken
+ name: spaces-secret
+ namespace: upbound-system
+ crdManagement:
+ pullBehavior: Pull
+```
+
+
+
+
+```yaml
+apiVersion: connect.upbound.io/v1alpha1
+kind: ClusterConnection
+metadata:
+ name: provider-connection
+ namespace: upbound-system
+spec:
+ secretRef:
+ kind: KubeConfig
+ name: provider-kubeconfig
+ namespace: upbound-system
+ crdManagement:
+ pullBehavior: Pull
+```
+
+
+
+
+
+
+
+### Configuration
+
+Bind APIs to make them available in your consumer cluster:
+
+```yaml
+apiVersion: connect.upbound.io/v1alpha1
+kind: ClusterAPIBinding
+metadata:
+ name: <resource.group>
+spec:
+ connectionRef:
+ kind: ClusterConnection
+ name: <connection-name> # Or the --name value from installation
+```
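+
+For namespace-scoped syncing, use an `APIBinding` instead. The sketch below assumes the namespaced `APIBinding` mirrors the `ClusterAPIBinding` schema shown above; the namespace and connection name are placeholders:
+
+```yaml
+apiVersion: connect.upbound.io/v1alpha1
+kind: APIBinding
+metadata:
+  name: <resource.group>
+  namespace: my-app
+spec:
+  connectionRef:
+    kind: ClusterConnection
+    name: spaces-connection
+```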
+
+
+
+
+The `ClusterAPIBinding` name must match the **Resource.Group** (name of the CustomResourceDefinition) of the CRD you want to bind.
+
+
+
+
+## Usage example
+
+After configuration, you can create API objects (in the consumer cluster) that
+will be synchronized to the provider cluster:
+
+```yaml
+apiVersion: nop.example.org/v1alpha1
+kind: NopResource
+metadata:
+ name: my-resource
+ namespace: default
+spec:
+ coolField: "Synchronized resource"
+ compositeDeletePolicy: Foreground
+```
+
+Verify the resource status:
+
+```bash
+kubectl get nopresource my-resource -o yaml
+
+```
+When the `APIBound=True` condition is present, it means that the API object has
+been synced to the provider cluster, and is being reconciled there. Whenever the
+API object in the provider cluster gets status updates (for example
+`Ready=True`), that status is synced back to the consumer cluster.
+
+Switch contexts to the provider cluster to see the API object being created:
+
+```bash
+up ctx
+# or: kubectl config use-context <provider-context>
+```
+
+```bash
+kubectl get nopresource my-resource -o yaml
+```
+
+Note that in the provider cluster, the API object is labeled with information about
+where it originates from, and with `connect.upbound.io/managed=true`.
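+
+For example, you can list every synced object in the provider cluster by that label:
+
+```bash
+kubectl get nopresources -l connect.upbound.io/managed=true
+```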
+
+## Monitoring and troubleshooting
+
+### Check connection status
+
+```bash
+kubectl get clusterconnection
+```
+
+Expected output:
+```
+NAME STATUS MESSAGE
+spaces-connection Ready Provider controlplane is available
+```
+
+### View available APIs
+
+```bash
+kubectl get clusterconnection spaces-connection -o jsonpath='{.status.offeredAPIs[*].name}'
+```
+
+### Check API binding status
+
+```bash
+kubectl get clusterapibinding
+```
+
+### Debug resource synchronization
+
+```bash
+kubectl describe <kind> <resource-name>
+```
+
+## Removal
+
+### Using the up CLI
+
+```bash
+up controlplane api-connector uninstall \
+ --consumer-kubeconfig ~/.kube/config \
+ --all
+```
+
+The `--all` flag removes all resources, including connections and secrets.
+Without the flag, only the runtime-related resources are removed; connections
+and secrets are kept.
+
+:::note
+Uninstall doesn't remove any API objects in the provider control plane. If you
+want to clean up all API objects there, delete all API objects from the consumer
+cluster before API connector uninstallation, and wait for the objects to get
+deleted.
+:::
+
+
+### Using Helm
+
+```bash
+helm uninstall api-connector -n upbound-system
+```
+
+## Limitations
+
+- **Preview feature**: Subject to breaking changes. Not yet production grade.
+- **CRD updates**: CRDs are pulled once but not automatically updated. If multiple Crossplane clusters offer the same CRD API, API changes must be synchronized out of band, for example using a [Crossplane Configuration](https://docs.crossplane.io/latest/packages/).
+- **Network requirements**: Consumer cluster must have direct network access to provider cluster.
+- **Wide permissions needed in consumer cluster**: Because the API connector doesn't know up front the names of the APIs it needs to reconcile, it currently runs with full "root" privileges in the consumer cluster.
+
+- **Connector polling**: API Connector checks for drift between the consumer and provider cluster
+ periodically through polling. The poll interval can be changed with the `pollInterval` Helm value.
+
+
+## Advanced configuration
+
+### Multiple connections
+
+You can connect to multiple provider clusters simultaneously by creating multiple `ClusterConnection` resources with different names and configurations.
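+
+As a minimal sketch (connection and secret names are illustrative), two connections can live side by side, each pointing at a different provider:
+
+```yaml
+apiVersion: connect.upbound.io/v1alpha1
+kind: ClusterConnection
+metadata:
+  name: prod-connection
+  namespace: upbound-system
+spec:
+  secretRef:
+    kind: UpboundRobotToken
+    name: prod-spaces-secret
+    namespace: upbound-system
+  crdManagement:
+    pullBehavior: Pull
+---
+apiVersion: connect.upbound.io/v1alpha1
+kind: ClusterConnection
+metadata:
+  name: staging-connection
+  namespace: upbound-system
+spec:
+  secretRef:
+    kind: UpboundRobotToken
+    name: staging-spaces-secret
+    namespace: upbound-system
+  crdManagement:
+    pullBehavior: Pull
+```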
+
+[contact]: https://www.upbound.io/contact-us
diff --git a/spaces_versioned_docs/version-1.14/howtos/auto-upgrade.md b/spaces_versioned_docs/version-1.14/howtos/auto-upgrade.md
new file mode 100644
index 000000000..edc50e38d
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/auto-upgrade.md
@@ -0,0 +1,126 @@
+---
+title: Automatically upgrade control planes
+sidebar_position: 50
+description: How to configure automatic upgrades of Crossplane in a control plane
+plan: "standard"
+---
+
+
+
+Upbound Spaces can automatically upgrade the version of Upbound Crossplane in managed control plane instances. You can edit the `spec.crossplane.autoUpgrade` field in your `ControlPlane` specification with the available release channels below.
+
+
+| Channel | Description | Example |
+|------------|-----------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
+| **None** | Disables auto upgrades. | _Uses version specified in `spec.crossplane.version`._ |
+| **Patch** | Upgrades to the latest supported patch release. | _Control plane version 1.12.2-up.2 auto upgrades to 1.12.3-up.1 upon release._ |
+| **Stable** | Default setting. Upgrades to the latest supported patch release on minor version _N-1_ where N is the latest supported minor version. | _If latest supported minor version is 1.14, auto upgrades to latest patch - 1.13.2-up.3_ |
+| **Rapid** | Upgrades to the latest supported patch release on the latest supported minor version. | _If the latest supported minor version is 1.14, auto upgrades to the latest patch of minor version. 1.14 upgrades to 1.14.5-up.1_ |
+
+
+:::warning
+
+The `Rapid` channel is only recommended for users willing to accept the risk of new features and potentially breaking changes.
+
+:::
+
+## Examples
+
+The specs below are examples of how to edit the `autoUpgrade` channel in your `ControlPlane` specification.
+
+To run a control plane with the `Rapid` auto upgrade channel, your spec should look like this:
+
+```yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ name: example-ctp
+spec:
+ crossplane:
+ autoUpgrade:
+ channel: Rapid
+ writeConnectionSecretToRef:
+ name: kubeconfig-example-ctp
+```
+
+To run a control plane with a pinned version of Crossplane, specify it in the `version` field:
+
+```yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ name: example-ctp
+spec:
+ crossplane:
+ version: 1.14.3-up.1
+ autoUpgrade:
+ channel: None
+ writeConnectionSecretToRef:
+ name: kubeconfig-example-ctp
+```
+
+## Supported Crossplane versions
+
+Spaces supports the three [preceding minor versions][preceding-minor-versions] from the last supported minor version. For example, if the last supported minor version is `1.14`, minor versions `1.13` and `1.12` are also supported. Versions older than the three most recent minor versions aren't supported. Only supported Crossplane versions are valid specifications for new control planes.
+
+Current Crossplane version support by Spaces version:
+
+| Spaces Version | Crossplane Version Min | Crossplane Version Max |
+|:--------------:|:----------------------:|:----------------------:|
+| 1.2 | 1.13 | 1.15 |
+| 1.3 | 1.13 | 1.15 |
+| 1.4 | 1.14 | 1.16 |
+| 1.5 | 1.14 | 1.16 |
+| 1.6 | 1.14 | 1.16 |
+| 1.7 | 1.14 | 1.16 |
+| 1.8 | 1.15 | 1.17 |
+| 1.9 | 1.16 | 1.18 |
+| 1.10 | 1.16 | 1.18 |
+| 1.11 | 1.16 | 1.18 |
+| 1.12 | 1.17 | 1.19 |
+
+
+Upbound offers extended support for all installed Crossplane versions released within a 12 month window since the last Spaces release. Contact your Upbound sales representative for more information on version support.
+
+
+:::warning
+
+If the auto upgrade channel is `Stable` or `Rapid`, the Crossplane version always stays within the support window after auto upgrade. If set to `Patch` or `None`, the minor version may fall outside the support window. You are responsible for upgrading to a supported version.
+
+:::
+
+To view the support status of a control plane instance, use `kubectl get ctp`.
+
+```bash
+kubectl get ctp
+NAME CROSSPLANE VERSION SUPPORTED READY MESSAGE AGE
+example-ctp 1.13.2-up.3 True True 31m
+
+```
+
+Unsupported versions return `SUPPORTED: False`.
+
+```bash
+kubectl get ctp
+NAME CROSSPLANE VERSION SUPPORTED READY MESSAGE AGE
+example-ctp 1.11.5-up.1 False True 31m
+
+```
+
+For more detail, use the `-o yaml` flag.
+
+```bash
+kubectl get controlplanes.spaces.upbound.io example-ctp -o yaml
+status:
+conditions:
+...
+- lastTransitionTime: "2024-01-23T06:36:10Z"
+ message: Crossplane version 1.11.5-up.1 is outside of the support window.
+ Oldest supported minor version is 1.12.
+ reason: UnsupportedCrossplaneVersion
+ status: "False"
+ type: Supported
+```
+
+
+[preceding-minor-versions]: /reference/usage/lifecycle/#maintenance-and-updates
diff --git a/spaces_versioned_docs/version-1.14/howtos/automation-and-gitops/_category_.json b/spaces_versioned_docs/version-1.14/howtos/automation-and-gitops/_category_.json
new file mode 100644
index 000000000..b65481af6
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/automation-and-gitops/_category_.json
@@ -0,0 +1,8 @@
+{
+ "label": "Automation & GitOps",
+ "position": 11,
+ "collapsed": true,
+ "customProps": {
+ "plan": "business"
+ }
+}
diff --git a/spaces_versioned_docs/version-1.14/howtos/automation-and-gitops/overview.md b/spaces_versioned_docs/version-1.14/howtos/automation-and-gitops/overview.md
new file mode 100644
index 000000000..7af47c032
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/automation-and-gitops/overview.md
@@ -0,0 +1,133 @@
+---
+title: Automation and GitOps Overview
+sidebar_label: Overview
+sidebar_position: 1
+description: Guide to automating control plane deployments with GitOps and Argo CD
+plan: "business"
+---
+
+Automating control plane deployments with GitOps enables declarative, version-controlled infrastructure management. This section covers integrating GitOps workflows with Upbound control planes using Argo CD and related tools.
+
+
+## What is GitOps?
+
+GitOps is an approach for managing infrastructure by:
+- **Declaratively describing** desired system state in Git
+- **Using controllers** to continuously reconcile actual state with desired state
+- **Treating Git as the source of truth** for all configuration and deployments
+
+Upbound control planes are fully compatible with GitOps patterns and we strongly recommend integrating GitOps in the platforms you build on Upbound.
+
+## Key Concepts
+
+### Argo CD
+[Argo CD](https://argo-cd.readthedocs.io/) is a popular Kubernetes-native GitOps controller. It continuously monitors Git repositories and automatically applies changes to your infrastructure when commits are detected.
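+
+As a quick illustration, a minimal Argo CD `Application` that syncs manifests from a Git repository into a registered control plane context might look like the sketch below; the repository URL, path, and destination name are placeholders:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: platform-claims
+  namespace: argocd
+spec:
+  project: default
+  source:
+    repoURL: https://github.com/acmeco/platform-configuration.git
+    targetRevision: main
+    path: claims
+  destination:
+    name: my-control-plane-context
+    namespace: default
+  syncPolicy:
+    automated:
+      prune: false
+      selfHeal: true
+```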
+
+### Deployment Models
+
+The way you configure GitOps depends on your deployment model:
+
+| Aspect | Cloud Spaces | Self-Hosted Spaces |
+|--------|--------------|-------------------|
+| **Access Method** | Upbound API with tokens | Kubernetes native (secrets/kubeconfig) |
+| **Configuration** | Kubeconfig via `up` CLI | Control plane connection secrets |
+| **Setup Complexity** | More involved (API integration) | Simpler (native Kubernetes) |
+| **Typical Use Case** | Managing Upbound resources | Managing workloads on control planes |
+
+## Getting Started
+
+**Choose your path based on your deployment model:**
+
+### Cloud Spaces
+If you're using Upbound Cloud Spaces (Dedicated or Managed):
+1. Start with [GitOps with Upbound Control Planes](../cloud-spaces/gitops-on-upbound.md)
+2. Learn how to integrate Argo CD with Cloud Spaces
+3. Manage both control plane infrastructure and Upbound resources declaratively
+
+### Self-Hosted Spaces
+If you're running self-hosted Spaces:
+1. Start with [GitOps with ArgoCD in Self-Hosted Spaces](../self-hosted/gitops-with-argocd.md)
+2. Learn how to configure control plane connection secrets
+3. Manage workloads deployed to your control planes
+
+## Common Workflows
+
+### Workflow 1: Managing Control Planes with GitOps
+Create and manage control planes themselves declaratively using provider-kubernetes:
+
+```yaml
+apiVersion: kubernetes.crossplane.io/v1alpha2
+kind: Object
+metadata:
+ name: my-controlplane
+spec:
+ forProvider:
+ manifest:
+ apiVersion: spaces.upbound.io/v1beta1
+ kind: ControlPlane
+ # ... control plane configuration
+```
+
+### Workflow 2: Managing Workloads on Control Planes
+Deploy applications and resources to control planes using standard Kubernetes GitOps patterns:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: my-app
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: my-app
+ namespace: my-app
+# ... deployment configuration
+```
+
+### Workflow 3: Managing Upbound Resources
+Use provider-upbound to manage Upbound IAM and repository resources:
+
+- Teams
+- Robots and their team memberships
+- Repositories and permissions
+
+## Advanced Topics
+
+### Argo CD Plugin for Upbound
+Learn more in the [ArgoCD Plugin guide](../self-hosted/use-argo.md) for enhanced integration with self-hosted Spaces.
+
+### Declarative Control Plane Creation
+See [Declaratively create control planes](../self-hosted/declarative-ctps.md) for advanced automation patterns.
+
+### Consuming Control Plane APIs
+Understand how to [consume control plane APIs in your app cluster](../mcp-connector-guide.md) with Argo CD.
+
+## Prerequisites
+
+Before implementing GitOps with control planes, ensure you have:
+
+**For Cloud Spaces:**
+- Access to Upbound Cloud Spaces
+- `up` CLI installed and configured
+- API token with appropriate permissions
+- Argo CD or similar GitOps controller running
+- Familiarity with Kubernetes RBAC
+
+**For Self-Hosted Spaces:**
+- Self-hosted Spaces deployed and running
+- Argo CD deployed in your infrastructure
+- Kubectl access to the cluster hosting Spaces
+- Understanding of control plane architecture
+
+## Next Steps
+
+1. **Choose your deployment model** above
+2. **Review the relevant getting started guide**
+3. **Set up your GitOps controller** (Argo CD)
+4. **Deploy your first automated control plane**
+5. **Explore advanced topics** as needed
+
+:::tip
+Start with simple deployments to test your GitOps workflow before moving to production. Use [simulations](../simulations.md) to preview changes before applying them.
+:::
diff --git a/spaces_versioned_docs/version-1.14/howtos/backup-and-restore.md b/spaces_versioned_docs/version-1.14/howtos/backup-and-restore.md
new file mode 100644
index 000000000..e434552ea
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/backup-and-restore.md
@@ -0,0 +1,516 @@
+---
+title: Backup and restore
+sidebar_position: 13
+description: Configure and manage backups in your Upbound Space.
+plan: "enterprise"
+---
+
+
+
+Upbound's _Shared Backups_ is a built-in backup and restore feature. Shared Backups lets you configure automatic schedules for taking snapshots of your control planes. You can restore data from these backups by making new control planes. This guide explains how to use Shared Backups for disaster recovery or upgrade scenarios.
+
+
+## Benefits
+
+The Shared Backups feature provides the following benefits:
+
+* Automatic backups for control planes without any operational overhead
+* Backup schedules for multiple control planes in a group
+* Shared Backups are available across all hosting environments of Upbound (Disconnected, Connected or Cloud Spaces)
+
+
+## Configure a Shared Backup Config
+
+
+[SharedBackupConfig][sharedbackupconfig] is a [group-scoped][group-scoped] resource. You should create them in a group containing one or more control planes. This resource configures the storage details and provider. Whenever a backup executes (either by schedule or manually initiated), it references a SharedBackupConfig to tell it where to store the snapshot.
+
+
+### Backup config provider
+
+
+The `spec.objectStorage.provider` and `spec.objectStorage.config` fields configure:
+
+* The object storage provider
+* The path to the provider
+* The credentials needed to communicate with the provider
+
+You can only set one provider. Upbound currently supports AWS, Azure, and GCP as providers.
+
+
+`spec.objectStorage.config` is a freeform map of configuration options for the object storage provider. See [Thanos object storage][thanos-object-storage] for more information on the formats for each supported cloud provider. `spec.objectStorage.bucket` and `spec.objectStorage.provider` override the required values in the config.
+
+
+
+#### AWS as a storage provider
+
+:::important
+For Cloud Spaces, static credentials are currently the only supported auth method.
+:::
+
+This example demonstrates how to use AWS as a storage provider for your backups:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupConfig
+metadata:
+ name: default
+ namespace: default
+spec:
+ objectStorage:
+ provider: AWS
+ bucket: spaces-backup-bucket
+ config:
+ endpoint: s3.eu-west-2.amazonaws.com
+ region: eu-west-2
+ credentials:
+ source: Secret
+ secretRef:
+ name: bucket-creds
+ key: creds
+```
+
+
+This example assumes you've already created an S3 bucket called "spaces-backup-bucket" in AWS `eu-west-2` region. The account credentials to access the bucket should exist in a secret of the same namespace as the Shared Backup Config.
+
+#### Azure as a storage provider
+
+:::important
+For Cloud Spaces, static credentials are currently the only supported auth method.
+:::
+
+This example demonstrates how to use Azure as a storage provider for your backups:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupConfig
+metadata:
+ name: default
+ namespace: default
+spec:
+ objectStorage:
+ provider: Azure
+ bucket: upbound-backups
+ config:
+ storage_account: upbackupstore
+ container: upbound-backups
+ endpoint: blob.core.windows.net
+ credentials:
+ source: Secret
+ secretRef:
+ name: bucket-creds
+ key: creds
+```
+
+
+This example assumes you've already created an Azure storage account called `upbackupstore` and blob `upbound-backups`. The storage account key to access the blob should exist in a secret of the same namespace as the Shared Backup Config.
+
+
+#### GCP as a storage provider
+
+:::important
+For Cloud Spaces, static credentials are currently the only supported auth method.
+:::
+
+This example demonstrates how to use Google Cloud Storage as a storage provider for your backups:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupConfig
+metadata:
+ name: default
+ namespace: default
+spec:
+ objectStorage:
+ provider: GCP
+ bucket: spaces-backup-bucket
+ credentials:
+ source: Secret
+ secretRef:
+ name: bucket-creds
+ key: creds
+```
+
+
+This example assumes you've already created a Cloud bucket called "spaces-backup-bucket" and a service account with access to this bucket. The key file should exist in a secret of the same namespace as the Shared Backup Config.
+
+
+## Configure a Shared Backup Schedule
+
+
+[SharedBackupSchedule][sharedbackupschedule] is a [group-scoped][group-scoped-1] resource. You should create them in a group containing one or more control planes. This resource defines a backup schedule for control planes within its corresponding group.
+
+Below is an example of a Shared Backup Schedule that takes backups every day of all control planes having `environment: production` labels:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+ name: daily-schedule
+ namespace: default
+spec:
+ schedule: "@daily"
+ configRef:
+ kind: SharedBackupConfig
+ name: default
+ controlPlaneSelector:
+ labelSelectors:
+ - matchLabels:
+ environment: production
+```
+
+### Define a schedule
+
+The `spec.schedule` field is a [Cron-formatted][cron-formatted] string. Some common examples are below:
+
+
+| Entry | Description |
+| ----------------- | ------------------------------------------------------------------------------------------------- |
+| `@hourly` | Run once an hour. |
+| `@daily` | Run once a day. |
+| `@weekly` | Run once a week. |
+| `0 0/4 * * *` | Run every 4 hours. |
+| `0/15 * * * 1-5` | Run every fifteenth minute on Monday through Friday. |
+| `@every 1h30m10s` | Run every 1 hour, 30 minutes, and 10 seconds. Hour is the largest measurement of time for @every. |
+
+
+### Exclude resources from the backup
+
+The `spec.excludedResources` field is an array of resource names to exclude from each backup.
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+ name: daily-schedule
+spec:
+ excludedResources:
+ - "xclusters.aws.platformref.upbound.io"
+ - "xdatabase.aws.platformref.upbound.io"
+ - "xrolepolicyattachment.iam.aws.crossplane.io"
+```
+
+:::warning
+You must specify resource names in lowercase "resource.group" format (for example, `xclusters.aws.platformref.upbound.io`). Using only the resource kind (for example, `XCluster`) isn't supported.
+:::
+
+### Suspend a schedule
+
+Use the `spec.suspend` field to suspend the schedule. A suspended schedule creates no new backups, but allows running backups to complete.
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+ name: daily-schedule
+spec:
+ suspend: true
+```
+
+### Set the time to live
+
+Set the `spec.ttl` field to define the time to live for the backup. After this time, the backup is eligible for garbage collection. If this field isn't set, the backup isn't garbage collected. The time to live is a duration, for example, `168h` for 7 days.
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+ name: daily-schedule
+spec:
+ ttl: 168h # Backup is garbage collected after 7 days
+```
+:::tip
+By default, this setting doesn't delete uploaded files. Review the next section to define
+the deletion policy.
+:::
+
+### Define the deletion policy
+
+Set the `spec.deletionPolicy` field to define backup deletion actions, including the
+deletion of the backup file from the bucket. The deletion policy value defaults
+to `Orphan`. Set it to `Delete` to remove uploaded files from the bucket. For more
+information on the backup and restore process, review the [Spaces API
+documentation][spaces-api-documentation].
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+ name: daily-schedule
+spec:
+ ttl: 168h # Backup is garbage collected after 7 days
+ deletionPolicy: Delete # Defaults to Orphan
+```
+
+### Garbage collect backups when the schedule gets deleted
+
+Set the `spec.useOwnerReferencesInBackup` to garbage collect associated backups when a shared schedule gets deleted. If set to true, backups are garbage collected when the schedule gets deleted.
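+
+Following the pattern of the earlier examples, a minimal sketch:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+  name: daily-schedule
+spec:
+  useOwnerReferencesInBackup: true # Backups are deleted along with this schedule
+```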
+
+### Control plane selection
+
+To configure which control planes in a group you want to create a backup schedule for, use the `spec.controlPlaneSelector` field. You can either use `labelSelectors` or the `names` of a control plane directly. A control plane matches if any of the label selectors match.
+
+This example matches all control planes in the group that have `environment: production` as a label:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+ name: my-backup-schedule
+spec:
+ controlPlaneSelector:
+ labelSelectors:
+ - matchLabels:
+ environment: production
+```
+
+You can use the more complex `matchExpressions` to match labels based on an expression. This example matches control planes that have label `environment: production` or `environment: staging`:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+ name: my-backup-schedule
+spec:
+ controlPlaneSelector:
+ labelSelectors:
+ - matchExpressions:
+ - { key: environment, operator: In, values: [production,staging] }
+```
+
+You can also specify the names of control planes directly:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackupSchedule
+metadata:
+ name: my-backup-schedule
+spec:
+ controlPlaneSelector:
+ names:
+ - controlplane-dev
+ - controlplane-staging
+ - controlplane-prod
+```
+
+
+## Configure a Shared Backup
+
+
+
+[SharedBackup][sharedbackup] is a [group-scoped][group-scoped-2] resource. You should create them in a group containing one or more control planes. This resource causes a backup to occur for control planes within its corresponding group.
+
+Below is an example of a Shared Backup that takes a backup of all control planes having `environment: production` labels:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackup
+metadata:
+ name: my-backup
+ namespace: default
+spec:
+ configRef:
+ kind: SharedBackupConfig
+ name: default
+ controlPlaneSelector:
+ labelSelectors:
+ - matchLabels:
+ environment: production
+```
+
+### Exclude resources from the backup
+
+The `spec.excludedResources` field is an array of resource names to exclude from each backup.
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackup
+metadata:
+ name: my-backup
+spec:
+ excludedResources:
+ - "xclusters.aws.platformref.upbound.io"
+ - "xdatabase.aws.platformref.upbound.io"
+ - "xrolepolicyattachment.iam.aws.crossplane.io"
+```
+
+:::warning
+You must specify resource names in lowercase "resource.group" format (for example, `xclusters.aws.platformref.upbound.io`). Using only the resource kind (for example, `XCluster`) isn't supported.
+:::
+
+### Set the time to live
+
+Set the `spec.ttl` field to define the time to live for the backup. After this time, the backup is eligible for garbage collection. If this field isn't set, the backup isn't garbage collected. The time to live is a duration, for example, `168h` for 7 days.
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackup
+metadata:
+ name: my-backup
+spec:
+ ttl: 168h # Backup is garbage collected after 7 days
+```
+
+
+### Garbage collect backups on Shared Backup deletion
+
+
+
+Set the `spec.useOwnerReferencesInBackup` to define whether to garbage collect associated backups when a shared backup gets deleted. If set to true, backups are garbage collected when the shared backup gets deleted.
+
+### Control plane selection
+
+To configure which control planes in a group you want to create a backup for, use the `spec.controlPlaneSelector` field. You can either use `labelSelectors` or the `names` of a control plane directly. A control plane matches if any of the label selectors match.
+
+This example matches all control planes in the group that have `environment: production` as a label:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackup
+metadata:
+ name: my-backup
+spec:
+ controlPlaneSelector:
+ labelSelectors:
+ - matchLabels:
+ environment: production
+```
+
+You can use the more complex `matchExpressions` to match labels based on an expression. This example matches control planes that have label `environment: production` or `environment: staging`:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackup
+metadata:
+ name: my-backup
+spec:
+ controlPlaneSelector:
+ labelSelectors:
+ - matchExpressions:
+ - { key: environment, operator: In, values: [production,staging] }
+```
+
+You can also specify the names of control planes directly:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedBackup
+metadata:
+ name: my-backup
+spec:
+ controlPlaneSelector:
+ names:
+ - controlplane-dev
+ - controlplane-staging
+ - controlplane-prod
+```
+
+## Create a manual backup
+
+[Backup][backup] is a [group-scoped][group-scoped-3] resource that causes a single backup to occur for a control plane in its corresponding group.
+
+Below is an example of a manual Backup of a control plane:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: Backup
+metadata:
+ name: my-backup
+ namespace: default
+spec:
+ configRef:
+ kind: SharedBackupConfig
+ name: default
+ controlPlane: my-awesome-ctp
+ deletionPolicy: Delete
+```
+
+The backup's `spec.deletionPolicy` field defines backup deletion actions,
+including the deletion of the backup file from the bucket. The value
+defaults to `Orphan`. Set it to `Delete` to remove uploaded files
+from the bucket.
+For more information on the backup and restore process, review the [Spaces API documentation][spaces-api-documentation-1].
+
+
+### Choose a control plane to backup
+
+The `spec.controlPlane` field defines which control plane to execute a backup against.
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: Backup
+metadata:
+ name: my-backup
+ namespace: default
+spec:
+ controlPlane: my-awesome-ctp
+```
+
+If the control plane doesn't exist, the backup fails after multiple failed retry attempts.
+
+### Exclude resources from the backup
+
+The `spec.excludedResources` field is an array of resource names to exclude from the manual backup.
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: Backup
+metadata:
+ name: my-backup
+spec:
+ excludedResources:
+ - "xclusters.aws.platformref.upbound.io"
+ - "xdatabase.aws.platformref.upbound.io"
+ - "xrolepolicyattachment.iam.aws.crossplane.io"
+```
+
+:::warning
+You must specify resource names in lowercase "resource.group" format (for example, `xclusters.aws.platformref.upbound.io`). Using only the resource kind (for example, `XCluster`) isn't supported.
+:::
+
+### Set the time to live
+
+Set the `spec.ttl` field to define the time to live for the backup. After this time, the backup is eligible for garbage collection. If this field isn't set, the backup isn't garbage collected. The time to live is a duration, for example, `168h` for 7 days.
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: Backup
+metadata:
+ name: my-backup
+spec:
+ ttl: 168h # Backup is garbage collected after 7 days
+```
+
+## Restore a control plane from a backup
+
+You can restore a control plane's state from a backup. Below is an example of creating a new control plane from a previous backup called `restore-me`:
+
+
+```yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ name: my-awesome-restored-ctp
+ namespace: default
+spec:
+ restore:
+ source:
+ kind: Backup
+ name: restore-me
+```
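+
+Once applied, you can watch the restored control plane become ready like any other control plane (names taken from the example above):
+
+```shell
+kubectl get ctp my-awesome-restored-ctp -n default
+```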
+
+
+[group-scoped]: /spaces/concepts/groups
+[group-scoped-1]: /spaces/concepts/groups
+[group-scoped-2]: /spaces/concepts/groups
+[group-scoped-3]: /spaces/concepts/groups
+[sharedbackupconfig]: /reference/apis/spaces-api/latest
+[thanos-object-storage]: https://thanos.io/tip/thanos/storage.md/
+[sharedbackupschedule]: /reference/apis/spaces-api/latest
+[cron-formatted]: https://en.wikipedia.org/wiki/Cron
+[spaces-api-documentation]: /reference/apis/spaces-api/v1_9
+[sharedbackup]: /reference/apis/spaces-api/latest
+[backup]: /reference/apis/spaces-api/latest
+[spaces-api-documentation-1]: /reference/apis/spaces-api/v1_9
+
+
+
diff --git a/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/_category_.json b/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/_category_.json
new file mode 100644
index 000000000..1e1869a38
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/_category_.json
@@ -0,0 +1,10 @@
+{
+ "label": "Cloud Spaces",
+ "position": 1,
+ "collapsed": true,
+ "customProps": {
+ "plan": "standard"
+ }
+}
+
+
diff --git a/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/dedicated-spaces-deployment.md b/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/dedicated-spaces-deployment.md
new file mode 100644
index 000000000..ebad9493e
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/dedicated-spaces-deployment.md
@@ -0,0 +1,33 @@
+---
+title: Dedicated Spaces
+sidebar_position: 4
+description: A guide to Upbound Dedicated Spaces
+plan: business
+---
+
+
+## Benefits
+
+Dedicated Spaces offer the following benefits:
+
+- **Single-tenancy.** A control plane space where Upbound guarantees you're the only tenant operating in the environment.
+- **Connectivity to your private network.** Establish secure network connections between your Dedicated Cloud Space running in Upbound and your own resources behind your private network.
+- **Reduced Overhead.** Offload day-to-day operational burdens to Upbound while focusing on your job of building your platform.
+
+## Architecture
+
+A Dedicated Space is a deployment of the Upbound Spaces software inside an
+Upbound-controlled cloud account and network. The control planes you run in a
+Dedicated Space are fully isolated from other Upbound tenants.
+
+The diagram below illustrates the high-level architecture of Upbound Dedicated Spaces:
+
+
+
+## How to get access to Dedicated Spaces
+
+If you have an interest in Upbound Dedicated Spaces, contact
+[Upbound][contact-us]. We can chat more about your
+requirements and see if Dedicated Spaces are a good fit for you.
+
+[contact-us]: https://www.upbound.io/contact-us
+[managed-space]: /spaces/howtos/self-hosted/managed-spaces-deployment
diff --git a/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/gitops-on-upbound.md b/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/gitops-on-upbound.md
new file mode 100644
index 000000000..fa59a8dce
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/cloud-spaces/gitops-on-upbound.md
@@ -0,0 +1,318 @@
+---
+title: GitOps with Upbound Control Planes
+sidebar_position: 80
+description: An introduction to doing GitOps with control planes on Upbound Cloud Spaces
+tier: "business"
+---
+
+:::info Deployment Model
+This guide applies to **Upbound Cloud Spaces** (Dedicated and Managed Spaces). For self-hosted Spaces deployments, see [GitOps with ArgoCD in Self-Hosted Spaces](/spaces/howtos/self-hosted/gitops-with-argocd/).
+:::
+
+GitOps is an approach for managing a system by declaratively describing desired resources' configurations in Git and using controllers to realize the desired state. Upbound's control planes are compatible with this pattern and it's strongly recommended you integrate GitOps in the platforms you build on Upbound.
+
+
+## Integrate with Argo CD
+
+
+[Argo CD][argo-cd] is a project in the Kubernetes ecosystem commonly used for GitOps. You can use it in tandem with Upbound control planes to achieve GitOps flows. The sections below explain how to integrate these tools with Upbound.
+
+### Generate a kubeconfig for your control plane
+
+Use the up CLI to [generate a kubeconfig][generate-a-kubeconfig] for your control plane.
+
+```bash
+up ctx <organization>/<space>/<group>/<control-plane> -f - > context.yaml
+```
+
+### Create an API token
+
+
+You need a personal access token (PAT). You create PATs on a per-user basis in the Upbound Console. Go to [My Account - API tokens][my-account-api-tokens] and select Create New Token. Give the token a name and save the secret value to somewhere safe.
+
+
+### Add the up CLI init container to Argo
+
+Create a new file called `up-plugin-values.yaml` and paste the following YAML:
+
+```yaml
+controller:
+ volumes:
+ - name: up-plugin
+ emptyDir: {}
+ - name: up-home
+ emptyDir: {}
+
+ volumeMounts:
+ - name: up-plugin
+ mountPath: /usr/local/bin/up
+ subPath: up
+ - name: up-home
+ mountPath: /home/argocd/.up
+
+ initContainers:
+ - name: up-plugin
+ image: xpkg.upbound.io/upbound/up-cli:v0.39.0
+ command: ["cp"]
+ args:
+ - /usr/local/bin/up
+ - /plugin/up
+ volumeMounts:
+ - name: up-plugin
+ mountPath: /plugin
+
+server:
+ volumes:
+ - name: up-plugin
+ emptyDir: {}
+ - name: up-home
+ emptyDir: {}
+
+ volumeMounts:
+ - name: up-plugin
+ mountPath: /usr/local/bin/up
+ subPath: up
+ - name: up-home
+ mountPath: /home/argocd/.up
+
+ initContainers:
+ - name: up-plugin
+ image: xpkg.upbound.io/upbound/up-cli:v0.39.0
+ command: ["cp"]
+ args:
+ - /usr/local/bin/up
+ - /plugin/up
+ volumeMounts:
+ - name: up-plugin
+ mountPath: /plugin
+```
+
+### Install or upgrade Argo using the values file
+
+Install or upgrade Argo via Helm, including the values from the `up-plugin-values.yaml` file:
+
+```bash
+helm upgrade --install -n argocd -f up-plugin-values.yaml --reuse-values argocd argo/argo-cd
+```
+
+
+### Configure Argo CD
+
+
+To configure Argo CD for Annotation resource tracking, edit the Argo CD ConfigMap in the Argo CD namespace.
+Add `application.resourceTrackingMethod: annotation` to the data section as below.
+This configuration turns off Argo CD auto pruning, preventing the deletion of Crossplane resources.
+
+Next, configure the [auto respect RBAC for the Argo CD controller][auto-respect-rbac-for-the-argo-cd-controller].
+By default, Argo CD attempts to discover some Kubernetes resource types that don't exist in a control plane.
+You must configure Argo CD to respect the cluster's RBAC rules so that Argo CD can sync.
+Add `resource.respectRBAC: normal` to the data section as below.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: argocd-cm
+data:
+ ...
+ application.resourceTrackingMethod: annotation
+ resource.respectRBAC: normal
+```
+
+:::tip
+The `resource.respectRBAC` configuration above tells Argo to respect RBAC for _all_ cluster contexts. If you're using an Argo CD instance to manage more than only control planes, consider changing the `clusters` string match for the configuration so that it applies only to control planes. For example, if every control plane context name followed the convention of being prefixed with `controlplane-`, you could set the string match to `controlplane-*`.
+:::
+
+
+### Create a cluster context definition
+
+
+Replace the variables in the following manifest and apply it to configure a new Argo cluster context definition.
+
+To configure Argo for a control plane in a Connected Space, replace `stringData.server` with the ingress URL of the control plane. This URL is the output of `up ctx`.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: my-control-plane
+ namespace: argocd
+ labels:
+ argocd.argoproj.io/secret-type: cluster
+type: Opaque
+stringData:
+ name: my-control-plane-context
+ server: https://.spaces.upbound.io/apis/spaces.upbound.io/v1beta1/namespaces//controlplanes//k8s
+ config: |
+ {
+ "execProviderConfig": {
+ "apiVersion": "client.authentication.k8s.io/v1",
+ "command": "up",
+ "args": [ "org", "token" ],
+ "env": {
+ "ORGANIZATION": "",
+ "UP_TOKEN": ""
+ }
+ },
+ "tlsClientConfig": {
+ "insecure": false,
+ "caData": ""
+ }
+ }
+```
+
+
+## GitOps for Upbound resources
+
+
+Like any other cloud service, you can drive the lifecycle of Upbound Cloud resources with Crossplane. This lets you establish GitOps flows to declaratively create and manage:
+
+- [control plane groups][control-plane-groups]
+- [control planes][control-planes]
+- [Upbound IAM resources][upbound-iam-resources]
+
+Use a control plane installed with [provider-upbound][provider-upbound] and [provider-kubernetes][provider-kubernetes] to achieve this.
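+
+If your control plane doesn't already have these providers installed, a minimal sketch for installing both looks like the following. The package versions shown are the ones referenced elsewhere in this guide; adjust them as needed.
+
+```yaml
+apiVersion: pkg.crossplane.io/v1
+kind: Provider
+metadata:
+  name: provider-upbound
+spec:
+  package: xpkg.upbound.io/upbound/provider-upbound:v0.8.0
+---
+apiVersion: pkg.crossplane.io/v1
+kind: Provider
+metadata:
+  name: provider-kubernetes
+spec:
+  package: xpkg.upbound.io/upbound/provider-kubernetes:v0.17.0
+```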
+
+### Provider-upbound
+
+[Provider-upbound][provider-upbound-2] is a Crossplane provider built by Upbound to interact with Upbound resources. Use _provider-upbound_ to declaratively create and manage the lifecycle of IAM resources and repositories:
+
+- [Robots][robots] and their membership to teams
+- [Teams][teams]
+- [Repositories][repositories] and [permissions][permissions] on those repositories.
+
+:::tip
+This provider defines managed resources for control planes, their auth, and permissions. These resources are only applicable to customers who run in Upbound's **Legacy Spaces** control plane hosting environments. Customers should use provider-kubernetes, explained below, to manage the lifecycle of control planes with Crossplane.
+:::
+
+### Provider-kubernetes
+
+[Provider-kubernetes][provider-kubernetes-3] is a Crossplane provider that defines an [Object][object] resource. Use _Objects_ as general-purpose resources to wrap _any_ Kubernetes resource for Crossplane to manage.
+
+Upbound [Space APIs][space-apis] are Kube-like APIs and have implemented support for most Kubernetes-style API concepts. You can use kubectl or any other Kubernetes-compatible tooling to interact with the API. This means you can use _provider-kubernetes_ to drive interactions with Space APIs.
+
+:::warning
+When interacting with a Cloud Space's API, the Kubernetes [watch][watch] feature **isn't implemented.** Argo CD requires _watch_ support to function as expected, meaning you can't point Argo directly at a Cloud Space until it's implemented.
+:::
+
+Use _provider-kubernetes_ to declaratively drive interactions with all [Space APIs][space-apis-1]. Wrap the desired API resource in an _Object_. See the example below for a control plane:
+
+```yaml
+apiVersion: kubernetes.crossplane.io/v1alpha2
+kind: Object
+metadata:
+ name: my-controlplane
+spec:
+ forProvider:
+ manifest:
+ apiVersion: spaces.upbound.io/v1beta1
+ kind: ControlPlane
+ metadata:
+ name: my-controlplane
+ namespace: default
+ spec:
+ crossplane:
+ autoUpgrade:
+ channel: Rapid
+```
+
+[Control plane groups][control-plane-groups-2] are a special case because they technically map to an underlying Kubernetes namespace. To create a control plane group in a Space, create a `kind: Namespace` with the `spaces.upbound.io/group` label. See the example below:
+
+```yaml
+apiVersion: kubernetes.crossplane.io/v1alpha2
+kind: Object
+metadata:
+ name: group1
+spec:
+ forProvider:
+ manifest:
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ name: group1
+ labels:
+ spaces.upbound.io/group: "true"
+ spec: {}
+```
+
+### Configure auth for provider-kubernetes
+
+Like any other Crossplane provider, _provider-kubernetes_ requires a valid [ProviderConfig][providerconfig] to authenticate with Upbound before interacting with its APIs. Follow the steps below to configure auth for a ProviderConfig on a control plane that you want to use to interact with Upbound resources.
+
+1. Define an environment variable for the name of your Upbound org account. Use `up org list` to retrieve this value.
+```shell
+export UPBOUND_ACCOUNT=""
+```
+
+2. Create a [personal access token][personal-access-token] and store it as an environment variable.
+```shell
+export UPBOUND_TOKEN=""
+```
+
+3. Log on to Upbound.
+```shell
+up login
+```
+
+4. Create a kubeconfig for the desired Cloud Space instance you want to interact with.
+```shell
+export CONTROLPLANE_CONFIG=/tmp/controlplane-kubeconfig
+KUBECONFIG=$CONTROLPLANE_CONFIG up ctx $UPBOUND_ACCOUNT/upbound-gcp-us-west-1 # Replace this path with whichever Cloud Space you want to communicate with.
+```
+
+5. On the control plane you want to use to interact with Upbound resources, create a secret containing the credentials:
+```shell
+kubectl -n crossplane-system create secret generic cluster-config --from-file=kubeconfig=$CONTROLPLANE_CONFIG
+kubectl -n crossplane-system create secret generic upbound-credentials --from-literal=token=$UPBOUND_TOKEN
+```
+
+6. Create a ProviderConfig that references the credentials created in the prior step. Create this resource in your control plane:
+```yaml
+apiVersion: kubernetes.crossplane.io/v1alpha1
+kind: ProviderConfig
+metadata:
+ name: default
+spec:
+ credentials:
+ source: Secret
+ secretRef:
+ namespace: crossplane-system
+ name: cluster-config
+ key: kubeconfig
+ identity:
+ type: UpboundTokens
+ source: Secret
+ secretRef:
+ name: upbound-credentials
+ namespace: crossplane-system
+ key: token
+```
+
+You can now create _Objects_ in the control plane which wrap Space APIs.
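+
+For example, here's a sketch of an _Object_ that uses the ProviderConfig above (via `spec.providerConfigRef`) to create a control plane in the Cloud Space; the names are illustrative:
+
+```yaml
+apiVersion: kubernetes.crossplane.io/v1alpha2
+kind: Object
+metadata:
+  name: dev-controlplane
+spec:
+  providerConfigRef:
+    name: default
+  forProvider:
+    manifest:
+      apiVersion: spaces.upbound.io/v1beta1
+      kind: ControlPlane
+      metadata:
+        name: dev-controlplane
+        namespace: default
+      spec:
+        crossplane:
+          autoUpgrade:
+            channel: Rapid
+```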
+
+[generate-a-kubeconfig]: /manuals/cli/concepts/contexts
+[control-plane-groups]: /spaces/concepts/groups
+[control-planes]: /spaces/concepts/control-planes
+[upbound-iam-resources]: /manuals/platform/concepts/identity-management
+[space-apis]: /reference/apis/spaces-api/v1_9
+[space-apis-1]: /reference/apis/spaces-api/v1_9
+[control-plane-groups-2]: /spaces/concepts/groups
+
+
+[argo-cd]: https://argo-cd.readthedocs.io/en/stable/
+[my-account-api-tokens]: https://accounts.upbound.io/settings/tokens
+[auto-respect-rbac-for-the-argo-cd-controller]: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#auto-respect-rbac-for-controller
+[spec-writeconnectionsecrettoref]: /reference/apis/spaces-api/latest
+[auto-respect-rbac-for-the-argo-cd-controller-1]: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#auto-respect-rbac-for-controller
+[provider-upbound]: https://marketplace.upbound.io/providers/upbound/provider-upbound
+[provider-kubernetes]: https://marketplace.upbound.io/providers/upbound/provider-kubernetes
+[provider-upbound-2]: https://marketplace.upbound.io/providers/upbound/provider-upbound
+[robots]: https://marketplace.upbound.io/providers/upbound/provider-upbound/v0.8.0/resources/iam.upbound.io/Robot/v1alpha1
+[teams]: https://marketplace.upbound.io/providers/upbound/provider-upbound/v0.8.0/resources/iam.upbound.io/Team/v1alpha1
+[repositories]: https://marketplace.upbound.io/providers/upbound/provider-upbound/v0.8.0/resources/repository.upbound.io/Repository/v1alpha1
+[permissions]: https://marketplace.upbound.io/providers/upbound/provider-upbound/v0.8.0/resources/repository.upbound.io/Permission/v1alpha1
+[provider-kubernetes-3]: https://marketplace.upbound.io/providers/upbound/provider-kubernetes
+[object]: https://marketplace.upbound.io/providers/upbound/provider-kubernetes/v0.17.0/resources/kubernetes.crossplane.io/Object/v1alpha2
+[watch]: https://kubernetes.io/docs/reference/using-api/api-concepts/#watch-bookmarks
+[providerconfig]: https://marketplace.upbound.io/providers/upbound/provider-kubernetes/v0.17.0/resources/kubernetes.crossplane.io/ProviderConfig/v1alpha1
+[personal-access-token]: https://accounts.upbound.io/settings/tokens
diff --git a/spaces_versioned_docs/version-1.14/howtos/control-plane-topologies.md b/spaces_versioned_docs/version-1.14/howtos/control-plane-topologies.md
new file mode 100644
index 000000000..11cd5efcf
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/control-plane-topologies.md
@@ -0,0 +1,561 @@
+---
+title: Control Plane Topologies
+sidebar_position: 15
+description: Configure scheduling of composites to remote control planes
+---
+
+
+:::important
+This feature is in private preview for select customers in Upbound Spaces. If you're interested in this deployment mode, please [contact us](https://www.upbound.io/support/contact).
+:::
+
+Upbound's _Control Plane Topology_ feature lets you build and deploy a platform
+of multiple control planes. These control planes work together for a unified platform
+experience.
+
+
+With the _Topology_ feature, you can install resource APIs that are
+reconciled by other control planes and configure the routing that occurs between
+control planes. You can also build compositions that reference other resources
+running on your control plane or elsewhere in Upbound.
+
+This guide explains how to use Control Plane Topology APIs to install, configure
+remote APIs, and build powerful compositions that reference other resources.
+
+## Benefits
+
+The Control Plane Topology feature provides the following benefits:
+
+* Decouple your platform architecture into independent offerings to improve your platform's software development lifecycle.
+* Install composite APIs from Configurations as CRDs which are fulfilled and reconciled by other control planes.
+* Route APIs to other control planes by configuring an _Environment_ resource, which define a set of routable dimensions.
+
+## How it works
+
+
+Imagine the scenario where you want to let a user reference a subnet when creating a database instance. To your control plane, the `kind: database` and `kind: subnet` are independent resources. To you as the composition author, these resources have an important relationship. It may be that:
+
+- you don't want your user to ever be able to create a database without specifying a subnet.
+- you want to let them create a subnet when they create the database, if it doesn't exist.
+- you want to allow them to reuse a subnet that got created elsewhere or gets shared by another user.
+
+In each of these scenarios, you must resort to writing complex composition logic
+to handle each case. The problem is compounded when the resource exists in a
+context separate from the current control plane's context. Imagine a scenario
+where one control plane manages Database resources and a second control plane
+manages networking resources. With the _Topology_ feature, you can offload these
+concerns to Upbound machinery.
+
+
+
+
+## Prerequisites
+
+Enable the Control Plane Topology feature in the Space you plan to run your control plane in:
+
+- Cloud Spaces: Not available yet
+- Connected Spaces: Space administrator must enable this feature
+- Disconnected Spaces: Space administrator must enable this feature
+
+
+
+## Compose resources with _ReferencedObjects_
+
+
+
+_ReferencedObject_ is a resource type available in an Upbound control plane that lets you reference other Kubernetes resources in Upbound.
+
+:::tip
+This feature is useful for composing resources that exist in a
+remote context, like another control plane. You can also use
+_ReferencedObjects_ to resolve references to any other Kubernetes object
+in the current control plane context. This could be a secret, another Crossplane
+resource, or more.
+:::
+
+### Declare the resource reference in your XRD
+
+To compose a _ReferencedObject_, you should start by adding a resource reference
+in your Composite Resource Definition (XRD). The convention for the resource
+reference follows the shape shown below:
+
+```yaml
+Ref:
+ type: object
+ properties:
+ apiVersion:
+ type: string
+ default: ""
+ enum: [ "" ]
+ kind:
+ type: string
+ default: ""
+ enum: [ "" ]
+ grants:
+ type: array
+ default: [ "Observe" ]
+ items:
+ type: string
+ enum: [ "Observe", "Create", "Update", "Delete", "*" ]
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - name
+```
+
+The `Ref` field should be named after the kind of resource you want to reference (for example, `networkRef` or `secretRef`). The `apiVersion` and `kind` should be the associated API version and kind of the resource you want to reference.
+
+The `name` and `namespace` strings are inputs that let your users specify the resource instance.
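+
+For illustration, a user filling in such a reference on a claim might write something like the following sketch; the API group, kind, and names are hypothetical:
+
+```yaml
+apiVersion: database.platform.upbound.io/v1alpha1
+kind: SQLInstance
+metadata:
+  name: my-db
+  namespace: team-a
+spec:
+  parameters:
+    networkRef:
+      name: shared-network
+      namespace: team-a
+      grants: ["Observe"]
+```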
+
+#### Grants
+
+The `grants` field is a special array that lets you give users the power to influence the behavior of the referenced resource. You can configure which of the available grants you let your user select and which it defaults to. Similar in behavior as [Crossplane management policies][crossplane-management-policies], each grant value does the following:
+
+- **Observe:** The composite may observe the state of the referenced resource.
+- **Create:** The composite may create the referenced resource if it doesn't exist.
+- **Update:** The composite may update the referenced resource.
+- **Delete:** The composite may delete the referenced resource.
+- **\*:** The composite has full control over the referenced resource.
+
+Here are some examples that show how it looks in practice:
+
+
+
+Show example for defining the reference to another composite resource
+
+```yaml
+apiVersion: apiextensions.crossplane.io/v1
+kind: CompositeResourceDefinition
+metadata:
+ name: xsqlinstances.database.platform.upbound.io
+spec:
+ type: object
+ properties:
+ parameters:
+ type: object
+ properties:
+ networkRef:
+ type: object
+ properties:
+ apiVersion:
+ type: string
+ default: "networking.platform.upbound.io"
+ enum: [ "networking.platform.upbound.io" ]
+ grants:
+ type: array
+ default: [ "Observe" ]
+ items:
+ type: string
+ enum: [ "Observe" ]
+ kind:
+ type: string
+ default: "Network"
+ enum: [ "Network" ]
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - name
+```
+
+
+
+
+
+Show example for defining the reference to a secret
+```yaml
+apiVersion: apiextensions.crossplane.io/v1
+kind: CompositeResourceDefinition
+metadata:
+ name: xsqlinstances.database.platform.upbound.io
+spec:
+ type: object
+ properties:
+ parameters:
+ type: object
+ properties:
+ secretRef:
+ type: object
+ properties:
+ apiVersion:
+ type: string
+ default: "v1"
+ enum: [ "v1" ]
+ grants:
+ type: array
+ default: [ "Observe" ]
+ items:
+ type: string
+ enum: [ "Observe", "Create", "Update", "Delete", "*" ]
+ kind:
+ type: string
+ default: "Secret"
+ enum: [ "Secret" ]
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - name
+```
+
+
+### Manually add the jsonPath
+
+:::important
+This step is a known limitation of the preview. We're working on tooling that
+removes the need for authors to do this step.
+:::
+
+During the preview timeframe of this feature, you must add an annotation by hand
+to the XRD. In your XRD's `metadata.annotations`, set the
+`references.upbound.io/schema` annotation. It should be a JSON string in the
+following format:
+
+```json
+{
+ "apiVersion": "references.upbound.io/v1alpha1",
+ "kind": "ReferenceSchema",
+ "references": [
+ {
+ "jsonPath": ".spec.parameters.secretRef",
+ "kinds": [
+ {
+ "apiVersion": "v1",
+ "kind": "Secret"
+ }
+ ]
+ }
+ ]
+}
+```
+
+Flatten this JSON into a string and set the annotation on your XRD. View the
+example below for an illustration:
+
+
+Show example setting the references.upbound.io/schema annotation
+```yaml
+apiVersion: apiextensions.crossplane.io/v1
+kind: CompositeResourceDefinition
+metadata:
+ name: xthings.networking.acme.com
+ annotations:
+ references.upbound.io/schema: '{"apiVersion":"references.upbound.io/v1alpha1","kind":"ReferenceSchema","references":[{"jsonPath":".spec.secretRef","kinds":[{"apiVersion":"v1","kind":"Secret"}]},{"jsonPath":".spec.configMapRef","kinds":[{"apiVersion":"v1","kind":"ConfigMap"}]}]}'
+```
+
+
+
+Show example for setting multiples references in the references.upbound.io/schema annotation
+```yaml
+apiVersion: apiextensions.crossplane.io/v1
+kind: CompositeResourceDefinition
+metadata:
+ name: xthings.networking.acme.com
+ annotations:
+ references.upbound.io/schema: '{"apiVersion":"references.upbound.io/v1alpha1","kind":"ReferenceSchema","references":[{"jsonPath":".spec.parameters.secretRef","kinds":[{"apiVersion":"v1","kind":"Secret"}]},{"jsonPath":".spec.parameters.configMapRef","kinds":[{"apiVersion":"v1","kind":"ConfigMap"}]}]}'
+```
+
+
+
+You can use a VSCode extension like [vscode-pretty-json][vscode-pretty-json] to make this task easier.
+
+
+### Compose a _ReferencedObject_
+
+To pair with the resource reference declared in your XRD, you must compose the referenced resource. Use the _ReferencedObject_ resource type to bring the resource into your composition. _ReferencedObject_ has the following schema:
+
+```yaml
+apiVersion: references.upbound.io/v1alpha1
+kind: ReferencedObject
+spec:
+ managementPolicies:
+ - Observe
+ deletionPolicy: Orphan
+ composite:
+ apiVersion:
+ kind:
+ name:
+ jsonPath: .spec.parameters.secretRef
+```
+
+The `spec.composite.apiVersion` and `spec.composite.kind` should match the API version and kind of the `compositeTypeRef` declared in your composition. The `spec.composite.name` should be the name of the composite resource instance.
+
+The `spec.composite.jsonPath` should be the path to the root of the resource ref you declared in your XRD.
+
+
+Show example for composing a resource reference to a secret
+
+```yaml
+apiVersion: apiextensions.crossplane.io/v1
+kind: Composition
+metadata:
+ name: demo-composition
+spec:
+ compositeTypeRef:
+ apiVersion: networking.acme.com/v1alpha1
+ kind: XThing
+ mode: Pipeline
+ pipeline:
+ - step: patch-and-transform
+ functionRef:
+ name: crossplane-contrib-function-patch-and-transform
+ input:
+ apiVersion: pt.fn.crossplane.io/v1beta1
+ kind: Resources
+ resources:
+ - name: secret-ref-object
+ base:
+ apiVersion: references.upbound.io/v1alpha1
+ kind: ReferencedObject
+ spec:
+ managementPolicies:
+ - Observe
+ deletionPolicy: Orphan
+ composite:
+ apiVersion: networking.acme.com/v1alpha1
+ kind: XThing
+ name: TO_BE_PATCHED
+ jsonPath: .spec.parameters.secretRef
+ patches:
+ - type: FromCompositeFieldPath
+ fromFieldPath: metadata.name
+ toFieldPath: spec.composite.name
+```
+
+
+By declaring a resource reference in your XRD, Upbound handles resolution of the desired resource.
+
+## Deploy APIs
+
+To configure routing resource requests between control planes, you need to deploy APIs in at least two control planes.
+
+### Deploy into a service-level control plane
+
+Package the APIs you build into a Configuration package and deploy it on a
+control plane in an Upbound Space. In Upbound, it's common to refer to the
+control plane where the Configuration package is deployed as a **service-level
+control plane**. This control plane runs the controllers that process the API
+requests and provision underlying resources. In a later section, you learn how
+you can use _Topology_ features to [configure routing][configure-routing].
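+
+For example, a minimal sketch of installing such a package directly on the service-level control plane; the package reference is illustrative:
+
+```yaml
+apiVersion: pkg.crossplane.io/v1
+kind: Configuration
+metadata:
+  name: platform-api
+spec:
+  package: xpkg.upbound.io/acme/platform-api:v0.1.0
+```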
+
+### Deploy as Remote APIs on a platform control plane
+
+You should use the same package source as deployed in the **service-level
+control planes**, but this time deploy the Configuration in a separate control
+plane as a _RemoteConfiguration_. The _RemoteConfiguration_ installs Kubernetes
+CustomResourceDefinitions for the APIs defined in the Configuration package, but
+no controllers get deployed.
+
+### Install a _RemoteConfiguration_
+
+_RemoteConfiguration_ is a resource type available in Upbound managed control
+planes that acts like a Crossplane [Configuration][configuration]
+package. Unlike standard Crossplane Configurations, which install XRDs,
+compositions, and functions into a desired control plane, _RemoteConfigurations_
+install only the CRDs for claimable composite resource types.
+
+#### Install directly
+
+Install a _RemoteConfiguration_ by defining the following and applying it to
+your control plane:
+
+```yaml
+apiVersion: pkg.upbound.io/v1alpha1
+kind: RemoteConfiguration
+metadata:
+ name:
+spec:
+ package:
+```
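+
+For instance, a filled-in sketch with a hypothetical package reference:
+
+```yaml
+apiVersion: pkg.upbound.io/v1alpha1
+kind: RemoteConfiguration
+metadata:
+  name: platform-api
+spec:
+  package: xpkg.upbound.io/acme/platform-api:v0.1.0
+```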
+
+#### Declare as a project dependency
+
+You can declare _RemoteConfigurations_ as dependencies in your control plane's
+[project file][project-file]. Use the up CLI to add the dependency, providing
+the `--remote` flag:
+
+```bash
+up dep add --remote
+```
+
+This command adds a declaration in the `spec.apiDependencies` stanza of your
+project's `upbound.yaml` as demonstrated below:
+
+```yaml
+apiVersion: meta.dev.upbound.io/v1alpha1
+kind: Project
+metadata:
+ name: service-controlplane
+spec:
+ apiDependencies:
+ - configuration: xpkg.upbound.io/upbound/remote-configuration
+ version: '>=v0.0.0'
+ dependsOn:
+ - provider: xpkg.upbound.io/upbound/provider-kubernetes
+ version: '>=v0.0.0'
+```
+
+Like a Configuration, a _RemoteConfigurationRevision_ gets created when the
+package gets installed on a control plane. Unlike Configurations, XRDs and
+compositions **don't** get installed by a _RemoteConfiguration_. Only the CRDs
+for claimable composite types get installed and Crossplane thereafter manages
+their lifecycle. You can tell when a CRD gets installed by a
+_RemoteConfiguration_ because it has the `internal.scheduling.upbound.io/remote:
+true` label:
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ name: things.networking.acme.com
+ labels:
+ internal.scheduling.upbound.io/remote: "true"
+```
+
+## Use an _Environment_ to route resources
+
+_Environment_ is a resource type available in Upbound control planes that works
+in tandem with resources installed by _RemoteConfigurations_. _Environment_ is a
+namespace-scoped resource that lets you configure how to route remote resources
+to other control planes by a set of user-defined dimensions.
+
+### Define a routing dimension
+
+To establish a routing dimensions between two control planes, you must do two
+things:
+
+1. Annotate the service control plane with the name and value of a dimension.
+2. Configure an environment on another control plane with a dimension matching the field and value of the service control plane.
+
+The example below demonstrates the creation of a service control plane with a
+`region` dimension:
+
+```yaml
+apiVersion: spaces.upbound.io/v1beta1
+kind: ControlPlane
+metadata:
+ labels:
+ dimension.scheduling.upbound.io/region: "us-east-1"
+ name: prod-1
+ namespace: default
+spec:
+```
+
+Upbound's Spaces controller keeps an inventory of all declared dimensions and
+listens for control planes to route to them.
+
+### Create an _Environment_
+
+Next, create an _Environment_ on a separate control plane, referencing the
+dimension from before. The example below demonstrates routing all remote
+resource requests in the `default` namespace of the control plane based on a
+single `region` dimension:
+
+```yaml
+apiVersion: scheduling.upbound.io/v1alpha1
+kind: Environment
+metadata:
+ name: default
+ namespace: default
+spec:
+ dimensions:
+ region: us-east-1
+```
+
+You can specify as many dimensions as you want. The example below demonstrates
+multiple dimensions:
+
+```yaml
+apiVersion: scheduling.upbound.io/v1alpha1
+kind: Environment
+metadata:
+ name: default
+ namespace: default
+spec:
+ dimensions:
+ region: us-east-1
+ env: prod
+ offering: databases
+```
+
+In order for the routing controller to match, _all_ dimensions must match for a
+given service control plane.
+
+You can specify dimension overrides on a per-resource group basis. This lets you
+configure default routing rules for a given _Environment_ and override routing
+on a per-offering basis.
+
+```yaml
+apiVersion: scheduling.upbound.io/v1alpha1
+kind: Environment
+metadata:
+ name: default
+ namespace: default
+spec:
+ dimensions:
+ region: us-east-1
+ resourceGroups:
+ - name: database.platform.upbound.io # database
+ dimensions:
+ region: "us-east-1"
+ env: "prod"
+ offering: "databases"
+ - name: networking.platform.upbound.io # networks
+ dimensions:
+ region: "us-east-1"
+ env: "prod"
+ offering: "networks"
+```
+
+### Confirm the configured route
+
+After you create an _Environment_ on a control plane, the routes selected get
+reported in the _Environment's_ `.status.resourceGroups`. This is illustrated
+below:
+
+```yaml
+apiVersion: scheduling.upbound.io/v1alpha1
+kind: Environment
+metadata:
+ name: default
+...
+status:
+ resourceGroups:
+ - name: database.platform.upbound.io # database
+ proposed:
+ controlPlane: ctp-1
+ group: default
+ space: upbound-gcp-us-central1
+ dimensions:
+ region: "us-east-1"
+ env: "prod"
+ offering: "databases"
+```
+
+If you don't see a response in the `.status.resourceGroups`, this indicates a
+match wasn't found or an error establishing routing occurred.
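+
+To check, inspect the _Environment_ directly, for example:
+
+```bash
+# Inspect the Environment's status to see which routes, if any, were selected
+kubectl get environments.scheduling.upbound.io default -n default -o yaml
+```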
+
+:::tip
+There's no limit to the number of control planes you can route to. You can also
+stack routing and form your own topology of control planes, with multiple layers
+of routing.
+:::
+
+### Limitations
+
+
+Routing from one control plane to another is currently scoped to control planes
+that exist in a single Space. You can't route resource requests to control
+planes that exist on a cross-Space boundary.
+
+
+[project-file]: /manuals/cli/howtos/project
+[contact-us]: https://www.upbound.io/usage/support/contact
+[crossplane-management-policies]: https://docs.crossplane.io/latest/managed-resources/managed-resources/#managementpolicies
+[vscode-pretty-json]: https://marketplace.visualstudio.com/items?itemName=chrismeyers.vscode-pretty-json
+[configure-routing]: #use-an-environment-to-route-resources
+[configuration]: https://docs.crossplane.io/latest/packages/providers
diff --git a/spaces_versioned_docs/version-1.14/howtos/ctp-connector.md b/spaces_versioned_docs/version-1.14/howtos/ctp-connector.md
new file mode 100644
index 000000000..cf4a7b235
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/ctp-connector.md
@@ -0,0 +1,503 @@
+---
+title: Control Plane Connector
+weight: 80
+description: A guide for how to connect a Kubernetes app cluster to a control plane in Upbound using the Control Plane connector feature
+plan: "standard"
+---
+
+
+
+
+Control Plane Connector connects arbitrary Kubernetes application clusters outside of
+Upbound Spaces to your control planes running in Upbound Spaces.
+This lets you interact with your control plane's API from the app cluster. The claim APIs and the namespaced XR APIs
+you define via CompositeResourceDefinitions (XRDs) in the control plane are available in
+your app cluster alongside Kubernetes workload APIs like Pod. Control Plane Connector
+enables the same experience as a locally installed Crossplane.
+
+
+
+### Control Plane Connector operations
+
+Control Plane Connector leverages the [Kubernetes API AggregationLayer][kubernetes-api-aggregationlayer]
+to create an extension API server and serve the claim APIs and the namespaced XR APIs in the control plane. It
+discovers the claim APIs and the namespaced XR APIs available in the control plane and registers corresponding
+APIService resources on the app cluster. Those APIService resources refer to the
+extension API server of Control Plane Connector.
+
+The claim APIs and the namespaced XR APIs are available in your Kubernetes cluster, just like all native
+Kubernetes APIs.
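+
+As a rough sketch, an APIService registered by the connector for one of these API groups might look like the following; the group, version, and service location are illustrative assumptions:
+
+```yaml
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+  name: v1alpha1.aws.platform.upbound.io
+spec:
+  # A claim API group served from the connected control plane (illustrative)
+  group: aws.platform.upbound.io
+  version: v1alpha1
+  groupPriorityMinimum: 1000
+  versionPriority: 15
+  service:
+    # The connector's extension API server (assumed name and namespace)
+    name: mcp-connector
+    namespace: kube-system
+  # Shown only to keep the sketch self-contained
+  insecureSkipTLSVerify: true
+```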
+
+The Control Plane Connector processes every request targeting the claim APIs and the namespaced XR APIs and makes the
+relevant requests to the connected control plane.
+
+Only the connected control plane stores and processes the claims and namespaced XRs created in the app
+cluster, eliminating any storage use on the application cluster. The Control Plane
+Connector provisions a target namespace in the control plane for the app cluster and stores
+all claims and namespaced XRs in this target namespace.
+
+For managing the claims and namespaced XRs, the Control Plane Connector creates a unique identifier for a
+resource by combining input parameters from claims, including:
+- `metadata.name`
+- `metadata.namespace`
+- `your cluster name`
+
+
+It employs SHA-256 hashing to generate a hash value and then extracts the first
+16 characters of that hash. This ensures the resulting identifier remains within
+the 64-character limit in Kubernetes.
+
+
+
+For instance, if a claim named `my-bucket` exists in the test namespace in
+`cluster-dev`, the system calculates the SHA-256 hash from
+`my-bucket-x-test-x-00000000-0000-0000-0000-000000000000` and takes the first 16
+characters. The control plane side then names the claim `claim-c603e518969b413e`.
+
+For namespaced XRs, the process is similar, only the prefix is different.
+The name becomes `nxr-c603e518969b413e`.
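+
+Based on the example above, the naming scheme can be sketched in shell as follows. The exact concatenation format is an assumption inferred from the example, not a specification:
+
+```bash
+# Claim name, namespace, and cluster identifier joined as in the example above
+INPUT="my-bucket-x-test-x-00000000-0000-0000-0000-000000000000"
+
+# First 16 hex characters of the SHA-256 hash, prefixed per resource type
+HASH=$(printf '%s' "$INPUT" | sha256sum | cut -c1-16)
+echo "claim-$HASH"   # name used on the control plane for a claim
+echo "nxr-$HASH"     # name used on the control plane for a namespaced XR
+```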
+
+
+### Installation
+
+
+
+
+
+Log in with the up CLI:
+
+```bash
+up login
+```
+
+Connect your app cluster to a namespace in an Upbound control plane with `up controlplane connector install `. This command creates a user token and installs the Control Plane Connector in your cluster. It's recommended you create a values file called `connector-values.yaml` and provide the values below. Select the tab that matches the environment your control plane runs in.
+
+
+
+
+
+
+```yaml
+upbound:
+ # This is your org account in Upbound e.g. the name displayed after executing `up org list`
+ account:
+ # This is a personal access token generated in the Upbound Console
+ token:
+
+spaces:
+  # The Spaces host for your Cloud Space, for example upbound-gcp-us-west-1.spaces.upbound.io or upbound-aws-us-east-1.spaces.upbound.io
+ host: "upbound-gcp-us-west-1.spaces.upbound.io"
+ insecureSkipTLSVerify: true
+ controlPlane:
+ # The name of the control plane you want the Connector to attach to
+ name:
+ # The control plane group the control plane resides in
+ group:
+ # The namespace within the control plane to sync claims from the app cluster to. NOTE: This must be created before you install the connector.
+ claimNamespace:
+```
+
+
+
+
+
+1. Create a [kubeconfig][kubeconfig] for the control plane. Update your Upbound context to the path for your desired control plane.
+```bash
+up login
+up ctx /upbound-gcp-us-central-1/default/your-control-plane
+up ctx . -f - > context.yaml
+```
+
+2. Write it to a secret in the cluster where you plan to
+install the Control Plane Connector to.
+```bash
+kubectl create secret generic my-controlplane-kubeconfig --from-file=kubeconfig=context.yaml
+```
+
+3. Reference this secret in the
+`spaces.controlPlane.kubeconfigSecret` field below.
+
+```yaml
+spaces:
+ controlPlane:
+ # The namespace within the control plane to sync claims from the app cluster to. NOTE: This must be created before you install the connector.
+ claimNamespace:
+ kubeconfigSecret:
+ name: my-controlplane-kubeconfig
+ key: kubeconfig
+```
+
+
+
+
+
+
+Provide the values file above when you run the CLI command:
+
+
+```bash
+up controlplane connector install my-control-plane my-app-ns-1 --file=connector-values.yaml
+```
+
+The Claim APIs and the namespaced XR APIs from your control plane are now visible in the cluster.
+You can verify this with `kubectl api-resources`.
+
+```bash
+kubectl api-resources
+```
+
+### Uninstall
+
+Disconnect an app cluster where you previously installed the Control Plane Connector
+by running the following:
+
+```bash
+up ctp connector uninstall
+```
+
+This command uninstalls the helm chart for the Control Plane Connector from an app
+cluster. It moves any claims in the app cluster into the control plane
+at the specified namespace.
+
+:::tip
+Make sure your kubeconfig's current context is pointed at the app cluster where
+you want to uninstall Control Plane Connector from.
+:::
+
+
+
+
+It's recommended you create a values file called `connector-values.yaml` and
+provide the values below. Select the tab that matches the environment your
+control plane runs in.
+
+
+
+
+
+
+```yaml
+upbound:
+ # This is your org account in Upbound e.g. the name displayed after executing `up org list`
+ account:
+ # This is a personal access token generated in the Upbound Console
+ token:
+
+spaces:
+ # Upbound GCP US-West-1 upbound-gcp-us-west-1.spaces.upbound.io
+ # Upbound AWS US-East-1 upbound-aws-us-east-1.spaces.upbound.io
+ # Upbound GCP US-Central-1 upbound-gcp-us-central-1.spaces.upbound.io
+ host: ""
+ insecureSkipTLSVerify: true
+ controlPlane:
+ # The name of the control plane you want the Connector to attach to
+ name:
+ # The control plane group the control plane resides in
+ group:
+ # The namespace within the control plane to sync claims from the app cluster to.
+ # NOTE: This must be created before you install the connector.
+ claimNamespace:
+```
+
+
+
+
+Create a [kubeconfig][kubeconfig-1] for the
+control plane. Write it to a secret in the cluster where you plan to
+install the Control Plane Connector to. Reference this secret in the
+`spaces.controlPlane.kubeconfigSecret` field below.
+
+```yaml
+spaces:
+ controlPlane:
+ # The namespace within the control plane to sync claims from the app cluster to. NOTE: This must be created before you install the connector.
+ claimNamespace:
+ kubeconfigSecret:
+ name: my-controlplane-kubeconfig
+ key: kubeconfig
+```
+
+
+
+
+
+
+Provide the values file above when you `helm install` the Control Plane Connector:
+
+
+```bash
+helm install --wait mcp-connector oci://xpkg.upbound.io/spaces-artifacts/mcp-connector -n kube-system -f connector-values.yaml
+```
+:::tip
+Create an API token from the Upbound user account settings page in the console by following [these instructions][these-instructions].
+:::
+
+### Uninstall
+
+You can uninstall Control Plane Connector with Helm by running the following:
+
+```bash
+helm uninstall mcp-connector
+```
+
+
+
+
+
+### Example usage
+
+This example creates a control plane using [Configuration
+EKS][configuration-eks]. `KubernetesCluster` is
+available as a claim API in your control plane. The following is [an
+example][an-example]
+object you can create in your control plane.
+
+```yaml
+apiVersion: k8s.starter.org/v1alpha1
+kind: KubernetesCluster
+metadata:
+ name: my-cluster
+ namespace: default
+spec:
+ id: my-cluster
+ parameters:
+ nodes:
+ count: 3
+ size: small
+ services:
+ operators:
+ prometheus:
+ version: "34.5.1"
+ writeConnectionSecretToRef:
+ name: my-cluster-kubeconfig
+```
+
+After connecting your Kubernetes app cluster to the control plane, you
+can create the `KubernetesCluster` object in your app cluster. Although your
+local cluster has the object, the actual resources live in your managed control
+plane inside Upbound.
+
+```bash {copy-lines="3"}
+# Applying the claim YAML above.
+# kubectl is set up to talk with your Kubernetes cluster.
+kubectl apply -f claim.yaml
+
+
+kubectl get claim -A
+NAME SYNCED READY CONNECTION-SECRET AGE
+my-cluster True True my-cluster-kubeconfig 2m
+```
+
+Once Kubernetes creates the object, view the console to see your object.
+
+
+
+You can interact with the object through your cluster just as if it
+lives in your cluster.
+
+### Migration to control planes
+
+This guide details the migration of a Crossplane installation to Upbound-managed
+control planes using the Control Plane Connector to manage claims on an application
+cluster.
+
+
+
+#### Export all resources
+
+Before proceeding, ensure that you have set the correct kubecontext for your application
+cluster.
+
+```bash
+up controlplane migration export --pause-before-export --output=my-export.tar.gz --yes
+```
+
+This command performs the following:
+- Pauses all claim, composite, and managed resources before export.
+- Scans the control plane for resource types.
+- Exports Crossplane and native resources.
+- Archives the exported state into `my-export.tar.gz`.
+
+Example output:
+```bash
+Exporting control plane state...
+ ✓ Pausing all claim resources before export... 1 resources paused! ⏸️
+ ✓ Pausing all composite resources before export... 7 resources paused! ⏸️
+ ✓ Pausing all managed resources before export... 34 resources paused! ⏸️
+ ✓ Scanning control plane for types to export... 231 types found! 👀
+ ✓ Exporting 231 Crossplane resources...125 resources exported! 📤
+ ✓ Exporting 3 native resources...19 resources exported! 📤
+ ✓ Archiving exported state... archived to "my-export.tar.gz"! 📦
+
+Successfully exported control plane state!
+```
+
+#### Import all resources
+
+Next, restore the exported resources into a target control plane, which serves
+as the destination for the Control Plane Connector.
+
+
+Log into Upbound and select the correct context:
+
+```bash
+up login
+up ctx
+up ctp create ctp-a
+```
+
+Output:
+```bash
+ctp-a created
+```
+
+Verify that the core Crossplane version on the application cluster matches the
+Crossplane version on the new managed control plane.
+
+Use the following command to import the resources:
+```bash
+up controlplane migration import -i my-export.tar.gz \
+ --unpause-after-import \
+ --mcp-connector-cluster-id=my-appcluster \
+ --mcp-connector-claim-namespace=my-appcluster
+```
+
+This command:
+- Restores base resources
+- Waits for XRDs and packages to establish
+- Imports claim and XR resources
+- Finalizes the import and resumes managed resources
+
+Note: `--mcp-connector-cluster-id` must be unique per application cluster, and
+`--mcp-connector-claim-namespace` is the namespace the system creates during the import.
+
+Example output:
+```bash
+Importing control plane state...
+ ✓ Reading state from the archive... Done! 👀
+ ✓ Importing base resources... 56 resources imported!📥
+ ✓ Waiting for XRDs... Established! ⏳
+ ✓ Waiting for Packages... Installed and Healthy! ⏳
+ ✓ Importing remaining resources... 88 resources imported! 📥
+ ✓ Finalizing import... Done! 🎉
+ ✓ Unpausing managed resources ... Done! ▶️
+
+Successfully imported control plane state!
+```
+
+#### Verify imported claims
+
+
+The Control Plane Connector renames all claims and adds additional labels to them.
+
+```bash
+kubectl get claim -A
+```
+
+Example output:
+```bash
+NAMESPACE NAME SYNCED READY CONNECTION-SECRET AGE
+my-appcluster cluster.aws.platformref.upbound.io/claim-e708ff592b974f51 True True platform-ref-aws-kubeconfig 3m17s
+```
+
+Inspect the labels:
+```bash
+kubectl get -n my-appcluster cluster.aws.platformref.upbound.io/claim-e708ff592b974f51 -o yaml | yq .metadata.labels
+```
+
+Example output:
+```bash
+mcp-connector.upbound.io/app-cluster: my-appcluster
+mcp-connector.upbound.io/app-namespace: default
+mcp-connector.upbound.io/app-resource-name: example
+```
+
+#### Cleanup the app cluster
+
+Remove all Crossplane-related resources from the application cluster, including:
+
+- Managed Resources
+- Claims
+- Compositions
+- XRDs
+- Packages (Functions, Configurations, Providers)
+- Crossplane and all associated CRDs
+
+
+#### Install Control Plane Connector
+
+
+Follow the preceding installation guide and configure the `connector-values.yaml`:
+
+```yaml
+# NOTE: clusterID needs to match --mcp-connector-cluster-id used in the import on the managed control Plane
+clusterID: my-appcluster
+upbound:
+ account:
+ token:
+
+spaces:
+ host: ""
+ insecureSkipTLSVerify: true
+ controlPlane:
+ name:
+ group:
+ # NOTE: This is the --mcp-connector-claim-namespace used during the import to the control plane
+ claimNamespace:
+```
+Once the Control Plane Connector installs, verify that resources exist in the application
+cluster:
+
+```bash
+kubectl api-resources | grep platform
+```
+
+Example output:
+```bash
+awslbcontrollers aws.platform.upbound.io/v1alpha1 true AWSLBController
+podidentities aws.platform.upbound.io/v1alpha1 true PodIdentity
+sqlinstances aws.platform.upbound.io/v1alpha1 true SQLInstance
+clusters aws.platformref.upbound.io/v1alpha1 true Cluster
+osss observe.platform.upbound.io/v1alpha1 true Oss
+apps platform.upbound.io/v1alpha1 true App
+```
+
+Verify that the claims from the control plane are available on the application cluster:
+
+```bash
+kubectl get claim -A
+```
+
+Example output:
+```bash
+NAMESPACE NAME SYNCED READY CONNECTION-SECRET AGE
+default cluster.aws.platformref.upbound.io/example True True platform-ref-aws-kubeconfig 127m
+```
+
+With this guide, you migrated your Crossplane installation to
+Upbound control planes and integrated it seamlessly with your
+application cluster using the Control Plane Connector.
+
+### Connect multiple app clusters to a control plane
+
+Claims are stored in a unique namespace in the Upbound control plane.
+Each connected cluster gets its own namespace in the control plane.
+
+
+
+There's no limit on the number of clusters connected to a single control plane.
+Control plane operators can see all their infrastructure in a central control
+plane.
+
+Without control planes and Control Plane Connector, users have to install
+Crossplane and providers in every cluster. Each cluster requires provider
+configuration with the necessary credentials. With a single control plane and
+multiple clusters connected through Upbound tokens, you don't need to hand out
+any cloud credentials to the clusters.
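+
+As a sketch, connecting two app clusters to the same control plane could look like the following; switch your kubeconfig context to each app cluster before running its command, and the names are illustrative:
+
+```bash
+# On app cluster 1, syncing claims into the my-app-ns-1 namespace of the control plane
+up controlplane connector install my-control-plane my-app-ns-1 --file=connector-values.yaml
+
+# On app cluster 2, syncing claims into its own my-app-ns-2 namespace
+up controlplane connector install my-control-plane my-app-ns-2 --file=connector-values.yaml
+```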
+
+
+[kubeconfig]: /manuals/cli/howtos/context-config/#generate-a-kubeconfig-for-a-control-plane-in-a-group
+[kubeconfig-1]:/spaces/concepts/control-planes/#connect-directly-to-your-control-plane
+[these-instructions]:/manuals/console/#create-a-personal-access-token
+[kubernetes-api-aggregationlayer]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
+[configuration-eks]: https://github.com/upbound/configuration-eks
+[an-example]: https://github.com/upbound/configuration-eks/blob/9f86b6d/.up/examples/cluster.yaml
diff --git a/spaces_versioned_docs/version-1.14/howtos/debugging-a-ctp.md b/spaces_versioned_docs/version-1.14/howtos/debugging-a-ctp.md
new file mode 100644
index 000000000..85a2ca688
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/debugging-a-ctp.md
@@ -0,0 +1,123 @@
+---
+title: Debugging issues on a control plane
+sidebar_position: 70
+description: A guide for how to debug resources on a control plane running in Upbound.
+---
+
+This guide provides troubleshooting guidance for how to identify and fix issues on a control plane.
+
+
+## Start from Upbound Console
+
+
+The Upbound [Console][console] has a built-in control plane explorer experience
+that surfaces status and events for the resources on your control plane. The
+explorer is claim-based. Resources in this view exist only if they exist in the
+reference chain originating from a claim. This view is a helpful starting point
+if you are attempting to debug an issue originating from a claim.
+
+:::tip
+If you directly create Crossplane Managed Resources (`MR`s) or Composite
+Resources (`XR`s), they won't render in the explorer.
+:::
+
+### Example
+
+The example below uses the control plane explorer view to inspect why a claim for an EKS Cluster isn't healthy.
+
+#### Check the health status of claims
+
+From the API type card, two claims branch from it: one shows a healthy green icon, while the other shows an unhealthy red icon.
+
+
+
+Select `More details` on the unhealthy claim card and Upbound shows details for the claim.
+
+
+
+Looking at the three events for this claim:
+
+- **ConfigureCompositeResource**: this event indicates Upbound created the claimed Composite Resource (`XR`).
+
+- **BindCompositeResource**: this indicates the Composite Resource (`XR`) that's being "claimed" isn't ready yet. A claim doesn't show `HEALTHY` until the XR it references is ready.
+
+- **ConfigureCompositeResource**: the error saying, `cannot apply composite resource...the object has been modified; please apply your changes to the latest version and try again` is a generic event from Crossplane resources. It's safe to ignore this error.
+
+Next, look at the `status` field of the rendered YAML for the resource.
+
+
+
+The status reports a similar message as the event stream: this claim is waiting for a Composite Resource to be ready. Based on this, investigate the Composite Resource referenced by this claim next.
+
+#### Check the health status of the Composite Resource
+
+
+The control plane explorer only shows the claim cards by default. Selecting the claim card renders the rest of the Crossplane resource tree associated with the selected claim.
+
+
+The previous claim expands into this screenshot:
+
+
+
+This renders the XR referenced by the claim (along with all its references). You can see the XR is showing the same unhealthy status icon in its card. Notice the XR itself has two nested XRs. One of the nested XRs shows a healthy green icon on its card, while the other shows an unhealthy red icon. Like the claim, a Composite Resource doesn't show healthy until all referenced resources also show healthy.
+
+#### Inspecting Managed Resources
+
+Selecting `More details` to inspect one of the unhealthy Managed Resources shows the following:
+
+
+
+This event reveals it's unhealthy because it's waiting on a reference to another Managed Resource. Searching the rendered YAML of the MR for this resource shows the following:
+
+
+
+The rendered YAML shows this MR is referencing a sibling MR that shares the same controller. The same parent XR created both of these managed resources. Inspect the sibling MR to see what its status is.
+
+
+
+The sibling MR's event stream shows the Provider processed the resource create request. Ignore the `CannotInitializeManagedResource` event. EKS clusters can take 15 minutes or more to provision in AWS, so the root cause here is that everything is fine and the resources are still provisioning. Waiting longer and then looking at the control plane explorer again shows all resources are healthy. For reference, below is an example status field for a resource that's healthy and provisioned.
+
+```yaml
+...
+status:
+ atProvider:
+ id: team-b-app-cluster-bhwfb-hwtgs-20230403135452772300000008
+ conditions:
+ - lastTransitionTime: '2023-04-03T13:56:35Z'
+ reason: Available
+ status: 'True'
+ type: Ready
+ - lastTransitionTime: '2023-04-03T13:54:02Z'
+ reason: ReconcileSuccess
+ status: 'True'
+ type: Synced
+ - lastTransitionTime: '2023-04-03T13:54:53Z'
+ reason: Success
+ status: 'True'
+ type: LastAsyncOperation
+ - lastTransitionTime: '2023-04-03T13:54:53Z'
+ reason: Finished
+ status: 'True'
+ type: AsyncOperation
+```
+
+### Control plane explorer limitations
+
+The control plane explorer view is currently designed around claims (`XRC`s). The control plane explorer doesn't inspect other Crossplane resources. To inspect other Crossplane resources, use the `up` CLI.
+
+Some examples of Crossplane resources that require the `up` CLI:
+
+- Managed Resources that aren't associated with a claim
+- Composite Resources that aren't associated with a claim
+- The status of _deleting_ resources
+- ProviderConfigs
+- Provider events
+
+## Use direct CLI access
+
+If your preference is to use a terminal instead of a GUI, Upbound supports direct access to the API server of the control plane. Use [`up ctx`][up-ctx] to connect directly to your control plane.
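+
+For example, a quick sketch of inspecting resources the explorer doesn't show; the context path is illustrative:
+
+```bash
+# Point kubectl at the control plane
+up ctx "my-org/upbound-gcp-us-west-1/default/my-control-plane"
+
+# List managed and composite resources, including ones not associated with a claim
+kubectl get managed
+kubectl get composite
+
+# Inspect provider packages and their events
+kubectl describe providers.pkg.crossplane.io
+```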
+
+
+[console]: /manuals/console/upbound-console
+[up-ctx]: /reference/cli-reference
diff --git a/spaces_versioned_docs/version-1.14/howtos/managed-service.md b/spaces_versioned_docs/version-1.14/howtos/managed-service.md
new file mode 100644
index 000000000..40b983a76
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/managed-service.md
@@ -0,0 +1,23 @@
+---
+title: Managed Upbound control planes
+description: "Learn about the managed service capabilities of a Space"
+sidebar_position: 10
+---
+
+Control planes in Upbound are fully isolated [Upbound Crossplane][uxp] instances
+that Upbound manages for you. This means:
+
+- the underlying lifecycle of infrastructure (compute, memory, and storage) required to power your instance.
+- scaling of the infrastructure.
+- the maintenance of the core Upbound Crossplane components that make up a control plane.
+
+This lets users focus on building their APIs and operating their control planes,
+while Upbound handles the rest. Each control plane has its own dedicated API
+server connecting users to their control plane.
+
+## Learn about Upbound control planes
+
+Read the [concept][ctp-concept] documentation to learn about Upbound control planes.
+
+[uxp]: /manuals/uxp/overview
+[ctp-concept]: /spaces/concepts/control-planes
\ No newline at end of file
diff --git a/spaces_versioned_docs/version-1.14/howtos/mcp-connector-guide.md b/spaces_versioned_docs/version-1.14/howtos/mcp-connector-guide.md
new file mode 100644
index 000000000..98b64cf15
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/mcp-connector-guide.md
@@ -0,0 +1,164 @@
+---
+title: Consume control plane APIs in an app cluster with control plane connector
+sidebar_position: 99
+description: A tutorial to configure a Space with Argo to declaratively create and
+ manage control planes
+---
+
+In this tutorial, you learn how to configure a Kubernetes app cluster to communicate with a control plane in an Upbound self-hosted Space.
+
+
+The [control plane connector][control-plane-connector] bridges your Kubernetes application clusters, running outside of Upbound, to your control planes running in Upbound. This allows you to interact with your control plane's API right from the app cluster. The claim APIs you define via `CompositeResourceDefinitions` are available alongside Kubernetes workload APIs like `Pod`. In effect, the control plane connector provides the same experience as a locally installed Crossplane.
+
+## Prerequisites
+
+To complete this tutorial, you need the following:
+
+- Have already deployed an Upbound Space.
+- Have already deployed a Kubernetes cluster (referred to as `app cluster`).
+
+## Create a control plane
+
+Create a new control plane in your self-hosted Space. Run the following command in a terminal:
+
+```bash
+up ctp create my-control-plane
+```
+
+Once the control plane is ready, connect to it.
+
+```bash
+up ctp connect my-control-plane
+```
+
+For convenience, install an Upbound [platform reference Configuration][platform-reference-configuration] from the marketplace. In production scenarios, replace this with your own Crossplane Configurations or compositions.
+
+```bash
+up ctp configuration install xpkg.upbound.io/upbound/platform-ref-aws:v1.4.0
+```
+
+## Fetch the control plane's connection details
+
+Run the following command in a terminal:
+
+```shell
+kubectl get secret kubeconfig-my-control-plane -n default -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-my-control-plane.yaml
+```
+
+This command saves the kubeconfig for the control plane to a file in your working directory.
+
+## Install control plane connector in your app cluster
+
+Switch contexts to your Kubernetes app cluster. To install the control plane connector in your app cluster, you must first provide a secret containing your control plane's kubeconfig at install-time. Run the following command in a terminal:
+
+:::important
+Make sure the following commands are executed against your **app cluster**, not your control plane.
+:::
+
+```bash
+kubectl create secret generic kubeconfig-my-control-plane -n kube-system --from-file=kubeconfig=./kubeconfig-my-control-plane.yaml
+```
+
+Set the environment variable below to configure which namespace _in your control plane_ you wish to sync the app cluster's claims to.
+
+```shell
+export CONNECTOR_CTP_NAMESPACE=app-cluster-1
+```
+
+Install the Control Plane Connector in the app cluster and point it to your control plane.
+
+```bash
+up ctp connector install my-control-plane $CONNECTOR_CTP_NAMESPACE --control-plane-secret=kubeconfig-my-control-plane
+```
+
+## Inspect your app cluster
+
+After you install Control Plane Connector in the app cluster, you can now see APIs which live on the control plane. You can confirm this is the case by running the following command on your app cluster:
+
+```bash {copy-lines="1"}
+kubectl api-resources | grep upbound
+
+# The output should look like this:
+sqlinstances aws.platform.upbound.io/v1alpha1 true SQLInstance
+clusters aws.platformref.upbound.io/v1alpha1 true Cluster
+osss observe.platform.upbound.io/v1alpha1 true Oss
+apps platform.upbound.io/v1alpha1 true App
+```
+
+## Claim a database instance on your app cluster
+
+Create a database claim against the `SQLInstance` API and observe resources get created by your control plane. Apply the following resources to your app cluster:
+
+```yaml
+cat < --output
+ ```
+
+ The command exports your existing Crossplane control plane configuration/state into an archive file.
+
+:::note
+By default, the export command doesn't make any changes to your existing Crossplane control plane state, leaving it intact. Use the `--pause-before-export` flag to pause the reconciliation on managed resources before exporting the archive file.
+
+This safety mechanism ensures the control plane you migrate state to doesn't assume ownership of resources before you're ready.
+:::
+
+2. Use the control plane [create command][create-command] to create a managed
+control plane in Upbound:
+
+ ```bash
+ up controlplane create my-controlplane
+ ```
+
+3. Use [`up ctx`][up-ctx] to connect to the control plane created in the previous step:
+
+ ```bash
+ up ctx "///my-controlplane"
+ ```
+
+ The command configures your local `kubeconfig` to connect to the control plane.
+
+4. Run the following command to import the archive file into the control plane:
+
+ ```bash
+ up controlplane migration import --input
+ ```
+
+:::note
+By default, the import command leaves the control plane in an inactive state by pausing the reconciliation on managed
+resources. This pause gives you an opportunity to review the imported configuration/state before activating the control plane.
+Use the `--unpause-after-import` flag to change the default behavior and activate the control plane immediately after
+importing the archive file.
+:::
+
+
+
+5. Review and validate the imported configuration/state. When you are ready, activate your managed
+ control plane by running the following command:
+
+ ```bash
+ kubectl annotate managed --all crossplane.io/paused-
+ ```
+
+ At this point, you can delete the source Crossplane control plane.
+
+## CLI options
+
+### Filtering
+
+The migration tool captures the state of a control plane. The only supported
+filtering is by Kubernetes namespace and Kubernetes resource type.
+
+You can exclude namespaces using the `--exclude-namespaces` CLI option. This can prevent the CLI from including unwanted resources in the export.
+
+```bash
+--exclude-namespaces=kube-system,kube-public,kube-node-lease,local-path-storage,...
+
+# A list of specific namespaces to exclude from the export. Defaults to 'kube-system', 'kube-public','kube-node-lease', and 'local-path-storage'.
+```
+
+You can exclude Kubernetes Resource types by using the `--exclude-resources` CLI option:
+
+```bash
+--exclude-resources=EXCLUDE-RESOURCES,...
+
+# A list of resource types to exclude from the export in "resource.group" format. No resources are excluded by default.
+```
+
+For example, here's how to exclude the CRDs installed by Crossplane functions (since they're not needed):
+
+```bash
+up controlplane migration export \
+ --exclude-resources=gotemplates.gotemplating.fn.crossplane.io,kclinputs.template.fn.crossplane.io
+```
+
+:::warning
+You must specify resource names in lowercase "resource.group" format (for example, `gotemplates.gotemplating.fn.crossplane.io`). Using only the resource kind (for example, `GoTemplate`) isn't supported.
+:::
+
+
+:::tip Function Input CRDs
+
+Exclude function input CRDs (`inputs.template.fn.crossplane.io`, `resources.pt.fn.crossplane.io`, `gotemplates.gotemplating.fn.crossplane.io`, `kclinputs.template.fn.crossplane.io`) from migration exports. Upbound automatically recreates these resources during import. Function input CRDs typically have owner references to function packages and may have restricted RBAC access. Upbound installs these CRDs during the import when function packages are restored.
+
+:::
+
+
+After export, you can also edit the archive file so it only includes the resources you need.
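+
+A minimal sketch of editing the archive, assuming the default `xp-state.tar.gz` file name:
+
+```bash
+# Unpack the archive into a working directory
+mkdir xp-state && tar -xzf xp-state.tar.gz -C xp-state
+
+# Remove or edit the manifests you don't want to import, then repack the archive
+tar -czf xp-state.tar.gz -C xp-state .
+```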
+
+### Export non-Crossplane resources
+
+Use the `--include-extra-resources` CLI option to include additional resource types (in "resource.group" format) in the export.
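+
+For example, to also export cert-manager `Certificate` resources (a hypothetical extra resource type, in the same "resource.group" format):
+
+```bash
+up controlplane migration export --include-extra-resources=certificates.cert-manager.io
+```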
+
+### Set the kubecontext
+
+Currently, `--context` isn't supported in the migration CLI. Instead, use the `--kubeconfig` CLI option to point at a kubeconfig file whose current context is set to the control plane you want to export. For example:
+
+```bash
+up controlplane migration export --kubeconfig <path-to-kubeconfig>
+```
+
+Use this in tandem with `up ctx` to export a control plane's kubeconfig:
+
+```bash
+up ctx --kubeconfig ~/.kube/config
+
+# To list the current context
+up ctx . --kubeconfig ~/.kube/config
+```
+
+## Export archive
+
+The migration CLI exports an archive upon successful completion. Below is an example export of a control plane that excludes several CRD types and skips the confirmation prompt. A file gets written to the working directory, unless you select another output file:
+
+
+
+View the example export
+
+```bash
+$ up controlplane migration export --exclude-resources=gotemplates.gotemplating.fn.crossplane.io,kclinputs.template.fn.crossplane.io --yes
+Exporting control plane state...
+✓ Scanning control plane for types to export... 121 types found! 👀
+✓ Exporting 121 Crossplane resources...60 resources exported! 📤
+✓ Exporting 3 native resources...8 resources exported! 📤
+✓ Archiving exported state... archived to "xp-state.tar.gz"! 📦
+```
+
+
+
+
+When an export occurs, a file named `xp-state.tar.gz` by default gets created in the working directory. You can extract the archive; all its contents are plain-text YAML files.
+
+- Each CRD (for example `vpcs.ec2.aws.upbound.io`) gets its own directory, which contains:
+  - A `metadata.yaml` file with the Kubernetes object metadata and the list of
+    Kubernetes categories the resource belongs to
+  - A `cluster` directory with YAML manifests for all resources provisioned
+    using the CRD
+
+Sample contents for a cluster with a single `XNetwork` composite from
+[configuration-aws-network][configuration-aws-network] are shown below:
+
+
+
+
+View the example cluster content
+
+```bash
+├── compositionrevisions.apiextensions.crossplane.io
+│ ├── cluster
+│ │ ├── kcl.xnetworks.aws.platform.upbound.io-4ca6a8a.yaml
+│ │ └── xnetworks.aws.platform.upbound.io-9859a34.yaml
+│ └── metadata.yaml
+├── configurations.pkg.crossplane.io
+│ ├── cluster
+│ │ └── configuration-aws-network.yaml
+│ └── metadata.yaml
+├── deploymentruntimeconfigs.pkg.crossplane.io
+│ ├── cluster
+│ │ └── default.yaml
+│ └── metadata.yaml
+├── export.yaml
+├── functions.pkg.crossplane.io
+│ ├── cluster
+│ │ ├── crossplane-contrib-function-auto-ready.yaml
+│ │ ├── crossplane-contrib-function-go-templating.yaml
+│ │ └── crossplane-contrib-function-kcl.yaml
+│ └── metadata.yaml
+├── internetgateways.ec2.aws.upbound.io
+│ ├── cluster
+│ │ └── borrelli-backup-test-xgl4q.yaml
+│ └── metadata.yaml
+├── mainroutetableassociations.ec2.aws.upbound.io
+│ ├── cluster
+│ │ └── borrelli-backup-test-t2qh7.yaml
+│ └── metadata.yaml
+├── namespaces
+│ └── cluster
+│ ├── crossplane-system.yaml
+│ ├── default.yaml
+│ └── upbound-system.yaml
+├── providerconfigs.aws.upbound.io
+│ ├── cluster
+│ │ └── default.yaml
+│ └── metadata.yaml
+├── providerconfigusages.aws.upbound.io
+│ ├── cluster
+│ │ ├── 0a2a3ec6-ef13-45f9-9cf0-63af7f4a6b6b.yaml
+...redacted
+│ │ └── f7092b0f-3a78-4bfe-82c8-57e5085a9b11.yaml
+│ └── metadata.yaml
+├── providers.pkg.crossplane.io
+│ ├── cluster
+│ │ ├── upbound-provider-aws-ec2.yaml
+│ │ └── upbound-provider-family-aws.yaml
+│ └── metadata.yaml
+├── routes.ec2.aws.upbound.io
+│ ├── cluster
+│ │ └── borrelli-backup-test-dt9cj.yaml
+│ └── metadata.yaml
+├── routetableassociations.ec2.aws.upbound.io
+│ ├── cluster
+│ │ ├── borrelli-backup-test-mr2sd.yaml
+│ │ ├── borrelli-backup-test-ngq5h.yaml
+│ │ ├── borrelli-backup-test-nrkgg.yaml
+│ │ └── borrelli-backup-test-wq752.yaml
+│ └── metadata.yaml
+├── routetables.ec2.aws.upbound.io
+│ ├── cluster
+│ │ └── borrelli-backup-test-dv4mb.yaml
+│ └── metadata.yaml
+├── secrets
+│ └── namespaces
+│ ├── crossplane-system
+│ │ ├── cert-token-signing-gateway-pub.yaml
+│ │ ├── mxp-hostcluster-certs.yaml
+│ │ ├── package-pull-secret.yaml
+│ │ └── xgql-tls.yaml
+│ └── upbound-system
+│ └── aws-creds.yaml
+├── securitygrouprules.ec2.aws.upbound.io
+│ ├── cluster
+│ │ ├── borrelli-backup-test-472f4.yaml
+│ │ └── borrelli-backup-test-qftmw.yaml
+│ └── metadata.yaml
+├── securitygroups.ec2.aws.upbound.io
+│ ├── cluster
+│ │ └── borrelli-backup-test-w5jch.yaml
+│ └── metadata.yaml
+├── storeconfigs.secrets.crossplane.io
+│ ├── cluster
+│ │ └── default.yaml
+│ └── metadata.yaml
+├── subnets.ec2.aws.upbound.io
+│ ├── cluster
+│ │ ├── borrelli-backup-test-8btj6.yaml
+│ │ ├── borrelli-backup-test-gbmrm.yaml
+│ │ ├── borrelli-backup-test-m7kh7.yaml
+│ │ └── borrelli-backup-test-nttt5.yaml
+│ └── metadata.yaml
+├── vpcs.ec2.aws.upbound.io
+│ ├── cluster
+│ │ └── borrelli-backup-test-7hwgh.yaml
+│ └── metadata.yaml
+└── xnetworks.aws.platform.upbound.io
+├── cluster
+│ └── borrelli-backup-test.yaml
+└── metadata.yaml
+43 directories, 87 files
+```
+
+
+
+
+The `export.yaml` file contains metadata about the export, including the configuration of the export, Crossplane information, and what's included in the export bundle.
+
+
+
+View the export
+
+```yaml
+version: v1alpha1
+exportedAt: 2025-01-06T17:39:53.173222Z
+options:
+  excludedNamespaces:
+    - kube-system
+    - kube-public
+    - kube-node-lease
+    - local-path-storage
+  includedResources:
+    - namespaces
+    - configmaps
+    - secrets
+  excludedResources:
+    - gotemplates.gotemplating.fn.crossplane.io
+    - kclinputs.template.fn.crossplane.io
+crossplane:
+  distribution: universal-crossplane
+  namespace: crossplane-system
+  version: 1.17.3-up.1
+  featureFlags:
+    - --enable-provider-identity
+    - --enable-environment-configs
+    - --enable-composition-functions
+    - --enable-usages
+stats:
+  total: 68
+  nativeResources:
+    configmaps: 0
+    namespaces: 3
+    secrets: 5
+  customResources:
+    amicopies.ec2.aws.upbound.io: 0
+    amilaunchpermissions.ec2.aws.upbound.io: 0
+    amis.ec2.aws.upbound.io: 0
+    availabilityzonegroups.ec2.aws.upbound.io: 0
+    capacityreservations.ec2.aws.upbound.io: 0
+    carriergateways.ec2.aws.upbound.io: 0
+    compositeresourcedefinitions.apiextensions.crossplane.io: 0
+    compositionrevisions.apiextensions.crossplane.io: 2
+    compositions.apiextensions.crossplane.io: 0
+    configurationrevisions.pkg.crossplane.io: 0
+    configurations.pkg.crossplane.io: 1
+...redacted
+```
+
+
+
+### Skipped resources
+
+In addition to the resources excluded via CLI options, the following resources aren't
+included in the backup:
+
+- The `kube-root-ca.crt` ConfigMap, since this is cluster-specific
+- Resources directly managed via Helm. (Resources deployed by ArgoCD's Helm implementation, which templates
+Helm manifests and then applies them, are included in the backup.) The migration builds the exclusion list by looking for:
+  - Any resource with the label `"app.kubernetes.io/managed-by" == "Helm"`
+  - Kubernetes Secrets with the label prefix `helm.sh/release`, for example `helm.sh/release.v1`
+- Resources installed via a Crossplane package. These have an `ownerReference` with
+a prefix `pkg.crossplane.io` (you can list them with the sketch after this list). The expectation is that during import, the Crossplane Package Manager is responsible for installing these resources.
+- Crossplane Locks: Any `Lock.pkg.crossplane.io` resource isn't included in the
+export.
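+
+To preview which CRDs the exporter skips because a Crossplane package owns them, you can look for `pkg.crossplane.io` owner references. A sketch, assuming `jq` is installed:
+
+```bash
+kubectl get crds -o json \
+  | jq -r '.items[]
+      | select(any(.metadata.ownerReferences[]?; .apiVersion | startswith("pkg.crossplane.io")))
+      | .metadata.name'
+```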
+
+## Restore
+
+The following is an example of a successful import run. At the end of the import, all Managed Resources are in a paused state.
+
+
+
+View the migration import
+
+```bash
+$ up controlplane migration import
+Importing control plane state...
+✓ Reading state from the archive... Done! 👀
+✓ Importing base resources... 18 resources imported! 📥
+✓ Waiting for XRDs... Established! ⏳
+✓ Waiting for Packages... Installed and Healthy! ⏳
+✓ Importing remaining resources... 50 resources imported! 📥
+✓ Finalizing import... Done! 🎉
+```
+
+
+
+Your scenario may involve migrating resources that already exist on the target control plane, created through other automation. In these circumstances, the importer applies the new manifests to the cluster. If a resource already exists, the restore overwrites its fields with the values from the backup.
+
+The importer restores all resources in the export archive. Managed Resources get imported with the `crossplane.io/paused: "true"` annotation set. Use the `--unpause-after-import` CLI argument to automatically un-pause resources after the import completes, or remove the annotation manually.
+
+### Restore order
+
+The importer restores resources based on their Kubernetes types. The restore order doesn't account for parent/child relationships.
+
+Because Crossplane Composites create new Managed Resources if not present on the cluster, all
+Claims, Composites and Managed Resources get imported in a paused state. You can un-pause them after the restore completes.
+
+The first step of import is installing Base Resources into the cluster. These resources (such as
+packages and XRDs) must be ready before proceeding with the import.
+Base Resources are:
+
+- Kubernetes Resources
+ - ConfigMaps
+ - Namespaces
+ - Secrets
+- Crossplane Resources
+ - ControllerConfigs: `controllerconfigs.pkg.crossplane.io`
+ - DeploymentRuntimeConfigs: `deploymentruntimeconfigs.pkg.crossplane.io`
+ - StoreConfigs: `storeconfigs.secrets.crossplane.io`
+- Crossplane Packages
+ - Providers: `providers.pkg.crossplane.io`
+ - Functions: `functions.pkg.crossplane.io`
+ - Configurations: `configurations.pkg.crossplane.io`
+
+Restore waits for the base resources to be `Ready` before moving on to the next step. Next, restore walks through the archive and restores all the manifests present.
+
+During import, the `crossplane.io/paused` annotation gets added to Managed Resources, Claims
+and Composites.
+
+To manually un-pause managed resources after an import, remove the annotation by running:
+
+```bash
+kubectl annotate managed --all crossplane.io/paused-
+```
+
+You can also run import again with the `--unpause-after-import` flag to remove the annotations.
+
+```bash
+up controlplane migration import --unpause-after-import
+```
+
+### Restoring resource status
+
+The importer applies the status of all resources during import. It uses the stored CRD version to determine whether that version defines a status field before applying it.
+
+
+[cli-command]: /reference/cli-reference
+[up-cli]: /reference/cli-reference
+[up-cli-1]: /manuals/cli/overview
+[create-command]: /reference/cli-reference
+[up-ctx]: /reference/cli-reference
+[configuration-aws-network]: https://marketplace.upbound.io/configurations/upbound/configuration-aws-network
diff --git a/spaces_versioned_docs/version-1.14/howtos/observability.md b/spaces_versioned_docs/version-1.14/howtos/observability.md
new file mode 100644
index 000000000..0ddd1f966
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/observability.md
@@ -0,0 +1,375 @@
+---
+title: Observability
+sidebar_position: 50
+description: A guide for how to use the integrated observability pipeline feature
+ in a Space.
+plan: "enterprise"
+---
+
+
+
+This guide explains how to configure observability in Upbound Spaces. Upbound
+provides integrated observability features built on
+[OpenTelemetry][opentelemetry] to collect, process, and export logs, metrics,
+and traces.
+
+Upbound Spaces offers two levels of observability:
+
+1. **Space-level observability** - Observes the cluster infrastructure where Spaces software is installed (Self-Hosted only)
+2. **Control plane observability** - Observes workloads running within individual control planes
+
+
+
+
+
+
+:::important
+**Space-level observability** (available since v1.6.0, GA in v1.14.0):
+- Disabled by default
+- Requires manual enablement and configuration
+- Self-Hosted Spaces only
+
+**Control plane observability** (available since v1.13.0, GA in v1.14.0):
+- Enabled by default
+- No additional configuration required
+:::
+
+
+
+
+## Prerequisites
+
+
+**Control plane observability** is enabled by default. No additional setup is
+required.
+
+
+
+### Self-hosted Spaces
+
+1. **Enable the observability feature** when installing Spaces:
+ ```bash
+ up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
+ ...
+ --set "observability.enabled=true"
+ ```
+
+
+   If you're running a Spaces version before `v1.14.0`, set
+   `features.alpha.observability.enabled=true` instead.
+
+2. **Install OpenTelemetry Operator** (required for Space-level observability):
+ ```bash
+ kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/download/v0.116.0/opentelemetry-operator.yaml
+ ```
+
+ :::important
+ If running Spaces `v1.11` or later, use OpenTelemetry Operator `v0.110.0` or later due to breaking changes.
+ :::
+
+
+## Space-level Observability
+
+Space-level observability is only available for self-hosted Spaces and allows
+administrators to observe the cluster infrastructure.
+
+### Configuration
+
+Configure Space-level observability using the `spacesCollector` value in your
+Spaces Helm chart:
+
+```yaml
+observability:
+  spacesCollector:
+    config:
+      exporters:
+        otlphttp:
+          endpoint: ""
+          headers:
+            api-key: YOUR_API_KEY
+      exportPipeline:
+        logs:
+          - otlphttp
+        metrics:
+          - otlphttp
+```
+
+This configuration exports metrics and logs from:
+
+- Crossplane installation
+- Spaces infrastructure (controller, API, router, etc.)
+
+### Router metrics
+
+The Spaces router uses Envoy as a reverse proxy and automatically exposes
+metrics when you enable Space-level observability. These metrics provide
+visibility into:
+
+- Traffic routing to control planes and services
+- Request status codes, timeouts, and retries
+- Circuit breaker state preventing cascading failures
+- Client connection patterns and request volume
+- Request latency (P50, P95, P99)
+
+For more information about available metrics, example queries, and how to enable
+this feature, see the [Space-level observability guide][space-level-o11y].
+
+## Control plane observability
+
+Control plane observability collects telemetry data from workloads running
+within individual control planes using `SharedTelemetryConfig` resources.
+
+The pipeline deploys [OpenTelemetry Collectors][opentelemetry-collectors] per
+control plane, defined by a `SharedTelemetryConfig` at the group level.
+Collectors pass data to external observability backends.
+
+:::important
+From Spaces `v1.13` and beyond, telemetry only includes user-facing control
+plane workloads (Crossplane, providers, functions).
+
+Self-hosted users can include system workloads (`api-server`, `etcd`) by setting
+`observability.collectors.includeSystemTelemetry=true` in Helm.
+:::
+
+:::important
+Spaces validates `SharedTelemetryConfig` resources before applying them by
+sending telemetry to the configured exporters. In self-hosted Spaces, ensure that
+`spaces-controller` can reach the exporter endpoints.
+:::
+
+### `SharedTelemetryConfig`
+
+`SharedTelemetryConfig` is a group-scoped custom resource that defines telemetry
+configuration for control planes.
+
+#### New Relic example
+
+```yaml
+apiVersion: observability.spaces.upbound.io/v1alpha1
+kind: SharedTelemetryConfig
+metadata:
+  name: newrelic
+  namespace: default
+spec:
+  controlPlaneSelector:
+    labelSelectors:
+      - matchLabels:
+          org: foo
+  exporters:
+    otlphttp:
+      endpoint: https://otlp.nr-data.net
+      headers:
+        api-key: YOUR_API_KEY
+  exportPipeline:
+    metrics: [otlphttp]
+    traces: [otlphttp]
+    logs: [otlphttp]
+```
+
+#### Datadog Example
+
+```yaml
+apiVersion: observability.spaces.upbound.io/v1alpha1
+kind: SharedTelemetryConfig
+metadata:
+  name: datadog
+  namespace: default
+spec:
+  controlPlaneSelector:
+    labelSelectors:
+      - matchLabels:
+          org: foo
+  exporters:
+    datadog:
+      api:
+        site: ${DATADOG_SITE}
+        key: ${DATADOG_API_KEY}
+  exportPipeline:
+    metrics: [datadog]
+    traces: [datadog]
+    logs: [datadog]
+```
+
+### Control plane selection
+
+Use `spec.controlPlaneSelector` to specify which control planes should use the
+telemetry configuration.
+
+#### Label-based selection
+
+```yaml
+spec:
+  controlPlaneSelector:
+    labelSelectors:
+      - matchLabels:
+          environment: production
+```
+
+#### Expression-based selection
+
+```yaml
+spec:
+  controlPlaneSelector:
+    labelSelectors:
+      - matchExpressions:
+          - { key: environment, operator: In, values: [production, staging] }
+```
+
+#### Name-based selection
+
+```yaml
+spec:
+  controlPlaneSelector:
+    names:
+      - controlplane-dev
+      - controlplane-staging
+      - controlplane-prod
+```
+
+### Manage sensitive data
+
+:::important
+Available from Spaces `v1.10`
+:::
+
+Store sensitive data in Kubernetes secrets and reference them in your
+`SharedTelemetryConfig`:
+
+1. **Create the secret:**
+   ```bash
+   kubectl create secret generic sensitive -n <group-namespace> \
+     --from-literal=apiKey='YOUR_API_KEY'
+   ```
+
+2. **Reference in SharedTelemetryConfig:**
+   ```yaml
+   apiVersion: observability.spaces.upbound.io/v1alpha1
+   kind: SharedTelemetryConfig
+   metadata:
+     name: newrelic
+   spec:
+     configPatchSecretRefs:
+       - name: sensitive
+         key: apiKey
+         path: exporters.otlphttp.headers.api-key
+     controlPlaneSelector:
+       labelSelectors:
+         - matchLabels:
+             org: foo
+     exporters:
+       otlphttp:
+         endpoint: https://otlp.nr-data.net
+         headers:
+           api-key: dummy # Replaced by secret value
+     exportPipeline:
+       metrics: [otlphttp]
+       traces: [otlphttp]
+       logs: [otlphttp]
+   ```
+
+### Telemetry processing
+
+:::important
+Available from Spaces `v1.11`
+:::
+
+Configure processing pipelines to transform telemetry data using the [transform
+processor][transform-processor].
+
+#### Add labels to metrics
+
+```yaml
+spec:
+  processors:
+    transform:
+      error_mode: ignore
+      metric_statements:
+        - context: datapoint
+          statements:
+            - set(attributes["newLabel"], "someLabel")
+  processorPipeline:
+    metrics: [transform]
+```
+
+#### Remove labels
+
+From metrics:
+```yaml
+processors:
+  transform:
+    metric_statements:
+      - context: datapoint
+        statements:
+          - delete_key(attributes, "kubernetes_namespace")
+```
+
+From logs:
+```yaml
+processors:
+  transform:
+    log_statements:
+      - context: log
+        statements:
+          - delete_key(attributes, "log.file.name")
+```
+
+#### Modify log messages
+
+```yaml
+processors:
+  transform:
+    log_statements:
+      - context: log
+        statements:
+          - set(attributes["original"], body)
+          - set(body, Concat(["log message:", body], " "))
+```
+
+### Monitor status
+
+Check the status of your `SharedTelemetryConfig`:
+
+```bash
+kubectl get stc
+NAME SELECTED FAILED PROVISIONED AGE
+datadog 1 0 1 63s
+```
+
+- `SELECTED`: Number of control planes selected
+- `FAILED`: Number of control planes that failed provisioning
+- `PROVISIONED`: Number of successfully running collectors
+
+For detailed status information:
+
+```bash
+kubectl describe stc
+```
+
+## Supported exporters
+
+Both Space-level and control plane observability support:
+- `datadog` - Datadog integration
+- `otlphttp` - General-purpose OTLP/HTTP exporter (used by New Relic, among others)
+- `debug` - Troubleshooting
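+
+For quick troubleshooting, a minimal `SharedTelemetryConfig` sketch that routes metrics to the `debug` exporter (the selector label is an assumption):
+
+```yaml
+apiVersion: observability.spaces.upbound.io/v1alpha1
+kind: SharedTelemetryConfig
+metadata:
+  name: debug
+  namespace: default
+spec:
+  controlPlaneSelector:
+    labelSelectors:
+      - matchLabels:
+          telemetry: debug
+  exporters:
+    debug: {}
+  exportPipeline:
+    metrics: [debug]
+```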
+
+## Considerations
+
+- **Control plane conflicts**: Each control plane can only use one `SharedTelemetryConfig`. Multiple configs selecting the same control plane conflict.
+- **Custom collector image**: Both Space-level and control plane observability use the same custom OpenTelemetry Collector image with supported exporters.
+- **Resource scope**: `SharedTelemetryConfig` resources are group-scoped, allowing different telemetry configurations per group.
+
+For more advanced configuration options, review the [Helm chart
+reference][helm-chart-reference] and [OpenTelemetry Transformation Language
+documentation][opentelemetry-transformation-language].
+
+
+[opentelemetry]: https://opentelemetry.io/
+[opentelemetry-collectors]: https://opentelemetry.io/docs/collector/
+[opentelemetry-collector-configuration]: https://opentelemetry.io/docs/collector/configuration/#exporters
+[opentelemetry-operator]: https://opentelemetry.io/docs/kubernetes/operator/
+[transform-processor]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/transformprocessor/README.md
+[opentelemetry-transformation-language]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/ottl
+[space-level-o11y]: /spaces/howtos/self-hosted/space-observability
+[helm-chart-reference]: /reference/helm-reference
+[opentelemetry-transformation-language-functions]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/ottlfuncs/README.md
+[opentelemetry-transformation-language-contexts]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/ottl/contexts
+[guide-on-ottl]: https://betterstack.com/community/guides/observability/ottl/#a-brief-overview-of-the-ottl-grammar
diff --git a/spaces_versioned_docs/version-1.14/howtos/query-api.md b/spaces_versioned_docs/version-1.14/howtos/query-api.md
new file mode 100644
index 000000000..c9703de55
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/query-api.md
@@ -0,0 +1,315 @@
+---
+title: Query API
+sidebar_position: 40
+description: Use the `up` CLI to query objects and resources
+---
+
+
+
+
+Upbound's Query API allows users to inspect objects and resources within their control planes. The read-only `up alpha query` and `up alpha get` CLI commands let you gather information about your control planes quickly and efficiently. These commands follow the [`kubectl` conventions][kubectl-conventions] for filtering, sorting, and retrieving information from your Space.
+
+
+
+
+## Using the Query API
+
+
+The Query API allows you to retrieve control plane information faster than traditional `kubectl` commands. This feature lets you debug your Crossplane resources with the CLI or within the Upbound Console's enhanced management views.
+
+### Query within a single control plane
+
+Use the `up alpha get` command to retrieve information about objects within the current control plane context. This command uses the **Query** endpoint and targets the current control plane.
+
+To switch between control plane groups, use the [`up ctx`][up-ctx] command and change to your desired context with the interactive prompt, or specify your control plane path directly:
+
+```shell
+up ctx <org>/<space>/<group>/<control-plane>
+```
+
+You can query within a single control plane with the [`up alpha get` command][up-alpha-get-command] to return more information about a given object within the current kubeconfig context.
+
+The `up alpha get` command can query resource types and aliases to return objects in your control plane.
+
+```shell
+up alpha get managed
+NAME READY SYNCED AGE
+custom-account1-5bv5j-sa True True 15m
+custom-cluster1-bq6dk-net True True 15m
+custom-account1-5bv5j-subnet True True 15m
+custom-cluster1-bq6dk-nodepool True True 15m
+custom-cluster1-bq6dk-cluster True True 15m
+custom-account1-5bv5j-net True True 15m
+custom-cluster1-bq6dk-subnet True True 15m
+custom-cluster1-bq6dk-sa True True 15m
+```
+
+The [`-A` flag][a-flag] queries for objects across all namespaces.
+
+```shell
+up alpha get configmaps -A
+NAMESPACE NAME AGE
+crossplane-system uxp-versions-config 18m
+crossplane-system universal-crossplane-config 18m
+crossplane-system kube-root-ca.crt 18m
+upbound-system kube-root-ca.crt 18m
+kube-system kube-root-ca.crt 18m
+kube-system coredns 18m
+default kube-root-ca.crt 18m
+kube-node-lease kube-root-ca.crt 18m
+kube-public kube-root-ca.crt 18m
+kube-system kube-apiserver-legacy-service-account-token-tracking 18m
+kube-system extension-apiserver-authentication 18m
+```
+
+To query for [multiple resource types][multiple-resource-types], you can add the name or alias for the resource as a comma separated string.
+
+```shell
+up alpha get providers,providerrevisions
+
+NAME HEALTHY REVISION IMAGE STATE DEP-FOUND DEP-INSTALLED AGE
+providerrevision.pkg.crossplane.io/crossplane-contrib-provider-nop-ecc25c121431 True 1 xpkg.upbound.io/crossplane-contrib/provider-nop:v0.2.1 Active 18m
+NAME INSTALLED HEALTHY PACKAGE AGE
+provider.pkg.crossplane.io/crossplane-contrib-provider-nop True True xpkg.upbound.io/crossplane-contrib/provider-nop:v0.2.1 18m
+```
+
+### Query multiple control planes
+
+The [`up alpha query` command][up-alpha-query-command] returns a list of objects of any kind within all the control planes in your Space. This command uses either the **SpaceQuery** or **GroupQuery** endpoints depending on your query scope. The `-A` flag switches the query context from the group level to the entire Space.
+
+The `up alpha query` command accepts resources and aliases to return objects across your group or Space.
+
+```shell
+up alpha query crossplane
+
+NAME ESTABLISHED OFFERED AGE
+compositeresourcedefinition.apiextensions.crossplane.io/xnetworks.platform.acme.co True True 20m
+compositeresourcedefinition.apiextensions.crossplane.io/xaccountscaffolds.platform.acme.co True True 20m
+
+
+NAME XR-KIND XR-APIVERSION AGE
+composition.apiextensions.crossplane.io/xaccountscaffolds.platform.acme.co XAccountScaffold platform.acme.co/v1alpha1 20m
+composition.apiextensions.crossplane.io/xnetworks.platform.acme.co XNetwork platform.acme.co/v1alpha1 20m
+
+
+NAME REVISION XR-KIND XR-APIVERSION AGE
+compositionrevision.apiextensions.crossplane.io/xaccountscaffolds.platform.acme.co-5ae9da5 1 XAccountScaffold platform.acme.co/v1alpha1 20m
+compositionrevision.apiextensions.crossplane.io/xnetworks.platform.acme.co-414ce80 1 XNetwork platform.acme.co/v1alpha1 20m
+
+NAME READY SYNCED AGE
+nopresource.nop.crossplane.io/custom-cluster1-bq6dk-subnet True True 19m
+nopresource.nop.crossplane.io/custom-account1-5bv5j-net True True 19m
+
+## Output truncated...
+
+```
+
+
+The [`--sort-by` flag][sort-by-flag] sorts the returned results. You specify the sort order as a JSONPath expression that resolves to a string or integer field.
+
+
+```shell
+up alpha query crossplane -A --sort-by="{.metadata.name}"
+
+CONTROLPLANE NAME AGE
+default/test deploymentruntimeconfig.pkg.crossplane.io/default 10m
+
+CONTROLPLANE NAME AGE TYPE DEFAULT-SCOPE
+default/test storeconfig.secrets.crossplane.io/default 10m Kubernetes crossplane-system
+```
+
+To query for multiple resource types, you can add the name or alias for the resource as a comma separated string.
+
+```shell
+up alpha query namespaces,configmaps -A
+
+CONTROLPLANE NAME AGE
+default/test namespace/upbound-system 15m
+default/test namespace/crossplane-system 15m
+default/test namespace/kube-system 16m
+default/test namespace/default 16m
+
+CONTROLPLANE NAMESPACE NAME AGE
+default/test crossplane-system configmap/uxp-versions-config 15m
+default/test crossplane-system configmap/universal-crossplane-config 15m
+default/test crossplane-system configmap/kube-root-ca.crt 15m
+default/test upbound-system configmap/kube-root-ca.crt 15m
+default/test kube-system configmap/coredns 16m
+default/test default configmap/kube-root-ca.crt 16m
+
+## Output truncated...
+
+```
+
+The Query API also allows you to return resource types with specific [label columns][label-columns].
+
+```shell
+up alpha query composite -A --label-columns=crossplane.io/claim-namespace
+
+CONTROLPLANE NAME SYNCED READY COMPOSITION AGE CLAIM-NAMESPACE
+query-api-test/test xeks.argo.discover.upbound.io/test-k7xbk False xeks.argo.discover.upbound.io 51d default
+
+CONTROLPLANE NAME EXTERNALDNS SYNCED READY COMPOSITION AGE CLAIM-NAMESPACE
+spaces-clusters/controlplane-query-api-test-spaces-playground xexternaldns.externaldns.platform.upbound.io/spaces-cluster-0-xd8v2-lhnl7 6.34.2 True True xexternaldns.externaldns.platform.upbound.io 19d default
+default/query-api-test xexternaldns.externaldns.platform.upbound.io/space-awg-kine-f7dxq-nkk2q 6.34.2 True True xexternaldns.externaldns.platform.upbound.io 55d default
+
+## Output truncated...
+
+```
+
+### Query API request format
+
+The CLI can also return a version of your query request with the [`--debug` flag][debug-flag]. This flag returns the API spec request for your query.
+
+```shell
+up alpha query composite -A -d
+
+apiVersion: query.spaces.upbound.io/v1alpha1
+kind: SpaceQuery
+metadata:
+  creationTimestamp: null
+spec:
+  cursor: true
+  filter:
+    categories:
+      - composite
+    controlPlane: {}
+  limit: 500
+  objects:
+    controlPlane: true
+    table: {}
+  page: {}
+```
+
+For more complex queries, you can interact with the Query API like a Kubernetes-style API by creating a query and applying it with `kubectl`.
+
+The example below is a query for `claim` resources in every control plane from oldest to newest and returns specific information about those claims.
+
+
+```yaml
+apiVersion: query.spaces.upbound.io/v1alpha1
+kind: SpaceQuery
+spec:
+  filter:
+    categories:
+      - claim
+  order:
+    - creationTimestamp: Asc
+  cursor: true
+  count: true
+  objects:
+    id: true
+    controlPlane: true
+    object:
+      kind: true
+      apiVersion: true
+      metadata:
+        name: true
+        uid: true
+      spec:
+        containers:
+          image: true
+```
+
+
+The Query API is served by the Spaces API endpoint. You can use `up ctx` to
+switch the kubectl context to the Spaces API ingress. After that, you can use
+`kubectl create` and receive the `response` for your query parameters.
+
+
+```shell
+kubectl create -f spaces-query.yaml -o yaml
+```
+
+Your `response` should look similar to this example:
+
+```yaml {copy-lines="none"}
+apiVersion: query.spaces.upbound.io/v1alpha1
+kind: SpaceQuery
+metadata:
+  creationTimestamp: "2024-08-08T14:41:46Z"
+  name: default
+response:
+  count: 3
+  cursor:
+    next: ""
+    page: 0
+    pageSize: 100
+    position: 0
+  objects:
+    - controlPlane:
+        name: query-api-test
+        namespace: default
+      id: default/query-api-test/823b2781-7e70-4d91-a6f0-ee8f455d67dc
+      object:
+        apiVersion: spaces.platform.upbound.io/v1alpha1
+        kind: Space
+        metadata:
+          name: space-awg-kine
+          resourceVersion: "803868"
+          uid: 823b2781-7e70-4d91-a6f0-ee8f455d67dc
+        spec: {}
+    - controlPlane:
+        name: test-1
+        namespace: test
+      id: test/test-1/08a573dd-851a-42cc-a600-b6f6ed37ee8d
+      object:
+        apiVersion: argo.discover.upbound.io/v1alpha1
+        kind: EKS
+        metadata:
+          name: test-1
+          resourceVersion: "4270320"
+          uid: 08a573dd-851a-42cc-a600-b6f6ed37ee8d
+        spec: {}
+    - controlPlane:
+        name: controlplane-query-api-test-spaces-playground
+        namespace: spaces-clusters
+      id: spaces-clusters/controlplane-query-api-test-spaces-playground/b5a6770f-1f85-4d09-8990-997c84bd4159
+      object:
+        apiVersion: spaces.platform.upbound.io/v1alpha1
+        kind: Space
+        metadata:
+          name: spaces-cluster-0
+          resourceVersion: "1408337"
+          uid: b5a6770f-1f85-4d09-8990-997c84bd4159
+        spec: {}
+```
+
+
+## Query API Explorer
+
+
+
+import CrdDocViewer from '@site/src/components/CrdViewer';
+
+### Query
+
+The Query resource allows you to query objects in a single control plane.
+
+
+
+### GroupQuery
+
+The GroupQuery resource allows you to query objects across a group of control planes.
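+
+A minimal `GroupQuery` sketch, assuming it shares the `spec` fields shown in the `SpaceQuery` examples above (check the schema in this explorer for the authoritative fields):
+
+```yaml
+apiVersion: query.spaces.upbound.io/v1alpha1
+kind: GroupQuery
+spec:
+  filter:
+    categories:
+      - managed
+  objects:
+    controlPlane: true
+    object:
+      kind: true
+      metadata:
+        name: true
+```
+
+Create it against the Spaces API in the group's namespace, for example `kubectl create -f group-query.yaml -n default -o yaml`.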
+
+
+
+### SpaceQuery
+
+The SpaceQuery resource allows you to query objects across all control planes in a space.
+
+
+
+
+
+
+[documentation]: /spaces/howtos/self-hosted/query-api
+[up-ctx]: /reference/cli-reference
+[up-alpha-get-command]: /reference/cli-reference
+[a-flag]: /reference/cli-reference
+[multiple-resource-types]: /reference/cli-reference
+[up-alpha-query-command]: /reference/cli-reference
+[sort-by-flag]: /reference/cli-reference
+[label-columns]: /reference/cli-reference
+[debug-flag]: /reference/cli-reference
+[kubectl-conventions]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/
diff --git a/spaces_versioned_docs/version-1.14/howtos/secrets-management.md b/spaces_versioned_docs/version-1.14/howtos/secrets-management.md
new file mode 100644
index 000000000..b901a7dad
--- /dev/null
+++ b/spaces_versioned_docs/version-1.14/howtos/secrets-management.md
@@ -0,0 +1,714 @@
+---
+title: Secrets Management
+sidebar_position: 20
+description: A guide for how to configure synchronizing external secrets into control
+ planes in a Space.
+---
+
+Upbound's _Shared Secrets_ is a built-in secrets management feature that
+provides an integrated way to manage secrets across your platform. It allows you
+to store sensitive data like passwords and certificates for your managed control
+planes as secrets in an external secret store.
+
+This feature is a wrapper around the [External Secrets Operator (ESO)][external-secrets-operator-eso] that pulls secrets from external vaults and distributes them across your platform.
+
+
+## Benefits
+
+The Shared Secrets feature allows you to:
+
+* Access secrets from a variety of external secret stores without operational overhead
+* Configure synchronization for multiple control planes in a group
+* Store and manage all your secrets centrally
+* Use Shared Secrets across all Upbound environments (Cloud and Disconnected Spaces)
+* Synchronize secrets across groups of control planes while maintaining clear security boundaries
+* Manage secrets at scale programmatically while ensuring proper isolation and access control
+
+## Understanding the Architecture
+
+The Shared Secrets feature uses a hierarchical approach to centrally manage
+secrets and effectively control their distribution.
+
+
+
+1. The flow begins at the group level, where you define your secret sources and distribution rules
+2. These rules automatically create corresponding resources in your control planes
+3. In each control plane, specific namespaces receive the secrets
+4. Changes at the group level automatically propagate through this chain
+
+## Component configuration
+
+Upbound Shared Secrets consists of two components:
+
+1. **SharedSecretStore**: Defines connections to external secret providers
+2. **SharedExternalSecret**: Specifies which secrets to synchronize and where
+
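+
+As a sketch of the second component, here is a minimal `SharedExternalSecret`. The selectors mirror the `SharedSecretStore` examples below and the embedded spec follows ESO's `ExternalSecret` API; treat the exact field names, the store name, and the `prod/database` key as assumptions and check the API reference for the authoritative schema:
+
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedExternalSecret
+metadata:
+  name: database-credentials
+spec:
+  # Which control planes receive the secret
+  controlPlaneSelector:
+    labelSelectors:
+      - matchLabels:
+          environment: production
+  # Which namespaces inside those control planes receive the secret
+  namespaceSelector:
+    names:
+      - default
+  # ESO ExternalSecret spec describing what to pull and where to put it
+  externalSecretSpec:
+    refreshInterval: 1h
+    secretStoreRef:
+      name: aws-secrets
+      kind: ClusterSecretStore
+    target:
+      name: database-credentials
+    dataFrom:
+      - extract:
+          key: prod/database
+```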
+
+### Connect to an External Vault
+
+
+The `SharedSecretStore` component is the connection point to your external
+secret vaults. It provisions ClusterSecretStore resources into control planes
+within the group.
+
+
+#### AWS Secrets Manager
+
+
+
+In this example, you'll create a `SharedSecretStore` that connects to AWS
+Secrets Manager in `us-west-2`, grants access to all control planes labeled with
+`environment: production`, and makes these secrets available in the `default` and
+`crossplane-system` namespaces.
+
+
+You can configure access to AWS Secrets Manager using static credentials or
+workload identity.
+
+:::important
+While the underlying ESO API supports more auth methods, static credentials are currently the only supported auth method in Cloud Spaces.
+:::
+
+##### Static credentials
+
+1. Use the AWS CLI to create access credentials.
+
+
+2. Note the `aws_access_key_id` and `aws_secret_access_key` values for the
+   credentials you created.
+
+3. Next, store the access credentials in a secret in the namespace you want to
+   have access to the `SharedSecretStore`. Use key names that match the
+   `SharedSecretStore` you create in the next step:
+```shell
+kubectl create secret \
+  generic aws-credentials \
+  -n default \
+  --from-literal=access-key-id='<aws_access_key_id>' \
+  --from-literal=secret-access-key='<aws_secret_access_key>'
+```
+
+4. Create a `SharedSecretStore` custom resource file called `secretstore.yaml`.
+ Paste the following configuration:
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedSecretStore
+metadata:
+  name: aws-secrets
+spec:
+  # Define which control planes should receive this configuration
+  controlPlaneSelector:
+    labelSelectors:
+      - matchLabels:
+          environment: production
+
+  # Define which namespaces within those control planes can access secrets
+  namespaceSelector:
+    names:
+      - default
+      - crossplane-system
+
+  # Configure the connection to AWS Secrets Manager
+  provider:
+    aws:
+      service: SecretsManager
+      region: us-west-2
+      auth:
+        secretRef:
+          accessKeyIDSecretRef:
+            name: aws-credentials
+            key: access-key-id
+          secretAccessKeySecretRef:
+            name: aws-credentials
+            key: secret-access-key
+```
+
+
+
+##### Workload Identity with IRSA
+
+
+
+You can also use AWS IAM Roles for Service Accounts (IRSA) depending on your
+organization's needs:
+
+1. Ensure you have deployed the Spaces software into an IRSA-enabled EKS cluster.
+2. Follow the AWS instructions to create an IAM OIDC provider with your EKS OIDC
+ provider URL.
+3. Determine the Spaces-generated `controlPlaneID` of your control plane:
+```shell
+kubectl get controlplane -o jsonpath='{.status.controlPlaneID}'
+```
+
+4. Create an IAM trust policy in your AWS account to match the control plane
+   (placeholders shown in angle brackets):
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Federated": "arn:aws:iam::<aws-account-id>:oidc-provider/<oidc-provider>"
+      },
+      "Action": "sts:AssumeRoleWithWebIdentity",
+      "Condition": {
+        "StringEquals": {
+          "<oidc-provider>:aud": "sts.amazonaws.com",
+          "<oidc-provider>:sub": [
+            "system:serviceaccount:mxp-<controlPlaneID>-system:external-secrets-controller"
+          ]
+        }
+      }
+    }
+  ]
+}
+```
+
+5. Update your Spaces deployment to annotate the SharedSecrets service account
+ with the role ARN.
+```shell
+up space upgrade ... \
+  --set controlPlanes.sharedSecrets.serviceAccount.customAnnotations."eks\.amazonaws\.com/role-arn"="<your-role-arn>"
+```
+
+6. Create a SharedSecretStore and reference the SharedSecrets service account:
+```yaml {copy-lines="all"}
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedSecretStore
+metadata:
+  name: aws-sm
+  namespace: default
+spec:
+  provider:
+    aws:
+      service: SecretsManager
+      region: <aws-region>
+      auth:
+        jwt:
+          serviceAccountRef:
+            name: external-secrets-controller
+  controlPlaneSelector:
+    names:
+      - <your-control-plane>
+  namespaceSelector:
+    names:
+      - default
+```
+
+When you create a `SharedSecretStore`, the underlying mechanism:
+
+1. Applies at the group level
+2. Determines which control planes should receive this configuration by the `controlPlaneSelector`
+3. Automatically creates a ClusterSecretStore inside each identified control plane
+4. Keeps the ClusterSecretStore in each control plane in sync with the
+   credentials and configuration from the parent SharedSecretStore
+
+Upbound automatically generates a ClusterSecretStore in each matching control
+plane when you create a SharedSecretStore.
+
+```yaml {copy-lines="none"}
+# Automatically created in each matching control plane
+apiVersion: external-secrets.io/v1beta1
+kind: ClusterSecretStore
+metadata:
+  name: aws-secrets # Name matches the parent SharedSecretStore
+spec:
+  provider:
+    upboundspaces:
+      storeRef:
+        name: aws-secrets
+```
+
+When you create the SharedSecretStore, the controller replaces the provider with
+a special provider called `upboundspaces`. This provider references the
+SharedSecretStore object in the Spaces API. This avoids copying the actual cloud
+credentials from Spaces to each control plane.
+
+This workflow lets you configure the store connection once at the group level
+and automatically propagate it to each control plane. Individual control planes
+can use the store without being exposed to the group-level configuration, and
+updates to the SharedSecretStore propagate to all child ClusterSecretStores.
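+
+To verify the propagation, switch your context into one of the selected control planes and list the generated stores. A sketch (the path components are placeholders):
+
+```bash
+up ctx <org>/<space>/<group>/<control-plane>
+kubectl get clustersecretstores
+```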
+
+
+#### Azure Key Vault
+
+
+:::important
+While the underlying ESO API supports more auth methods, static credentials are currently the only supported auth method in Cloud Spaces.
+:::
+
+##### Static credentials
+
+1. Use the Azure CLI to create a service principal.
+2. Save the service principal credentials in a file, for example `azure-credentials.json`:
+```json
+{
+  "appId": "myAppId",
+  "displayName": "myServicePrincipalName",
+  "password": "myServicePrincipalPassword",
+  "tenant": "myTenantId"
+}
+```
+
+3. Store the credentials as a Kubernetes secret, using key names that match the
+   `SharedSecretStore` in the next step:
+```shell
+kubectl create secret \
+  generic azure-secret-sp \
+  -n default \
+  --from-literal=ClientID='<appId>' \
+  --from-literal=ClientSecret='<password>'
+```
+
+4. Create a SharedSecretStore referencing these credentials:
+```yaml
+apiVersion: spaces.upbound.io/v1alpha1
+kind: SharedSecretStore
+metadata:
+  name: azure-kv
+spec:
+  provider:
+    azurekv:
+      tenantId: "<your-tenant-id>"
+      vaultUrl: "<your-key-vault-url>"
+      authSecretRef:
+        clientId:
+          name: azure-secret-sp
+          key: ClientID
+        clientSecret:
+          name: azure-secret-sp
+          key: ClientSecret
+  controlPlaneSelector:
+    names:
+      - <your-control-plane>
+  namespaceSelector:
+    names:
+      - default
+```
+
+##### Workload Identity
+
+
+You can also use Entra Workload Identity Federation to access Azure Key Vault
+without needing to manage secrets.
+
+To use Entra Workload ID with AKS:
+
+
+1. Deploy the Spaces software into a [workload identity-enabled AKS cluster][workload-identity-enabled-aks-cluster].
+2. Retrieve the OIDC issuer URL of the AKS cluster:
+```shell
+az aks show --name "<cluster-name>" \
+  --resource-group "<resource-group>" \
+  --query "oidcIssuerProfile.issuerUrl" \
+  --output tsv
+```
+
+3. Use the Azure CLI to make a managed identity:
+```shell
+az identity create \
+  --name "<identity-name>" \
+  --resource-group "<resource-group>" \
+  --location "<location>" \
+  --subscription "<subscription-id>"
+```
+
+4. Look up the managed identity's client ID:
+```shell
+az identity show \
+  --resource-group "<resource-group>" \
+  --name "<identity-name>" \
+  --query 'clientId' \
+  --output tsv
+```
+
+5. Update your Spaces deployment to annotate the SharedSecrets service account with the associated Entra application client ID from the previous step:
+```shell
+up space upgrade ... \
+ --set controlPlanes.sharedSecrets.serviceAccount.customAnnotations."azure\.workload\.identity/client-id"="