Copilot AI commented Dec 29, 2025

Implements an Ansible role that deploys nfs-subdir-external-provisioner via ArgoCD using a static, non-templated Application manifest. StorageClasses are managed separately from the Helm chart for explicit version control.

Files Added

ArgoCD Application (argocd/nfs_provisioner/nfs_provisioner.yml)

  • Helm chart nfs-subdir-external-provisioner:4.0.18
  • NFS server 192.168.226.6:/volume2/k8s
  • Chart StorageClass creation disabled (storageClass.create: false)
  • Provisioner name: cluster.local/nfs-subdir-external-provisioner
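
Given those settings, the Application manifest plausibly takes the shape below. The repoURL, Application name, project, target namespace, and sync policy are assumptions (the diff itself is not shown here); the chart value keys follow the upstream chart's documented layout.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nfs-provisioner            # assumed name; must match the role's app-name default
  namespace: argocd
spec:
  project: default                 # assumed project
  destination:
    server: https://kubernetes.default.svc
    namespace: nfs-provisioner     # assumed target namespace
  source:
    repoURL: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
    chart: nfs-subdir-external-provisioner
    targetRevision: 4.0.18         # pinned, no floating latest
    helm:
      valuesObject:
        nfs:
          server: 192.168.226.6
          path: /volume2/k8s
        storageClass:
          create: false            # SCs are managed as repo YAML instead
        provisionerName: cluster.local/nfs-subdir-external-provisioner  # key per upstream chart docs
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```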

StorageClasses (k8s/storageclasses/)

  • nfs-synology-delete.yml - Dynamic, Delete reclaim
  • nfs-synology-retain.yml - Dynamic, Retain reclaim
  • nfs-static-retain.yml - Static PVs, kubernetes.io/no-provisioner
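
As a sketch, the Delete-reclaim class presumably looks like the following (fields taken from the issue spec; only the manifest layout is assumed):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-synology-delete
# Must match the provisionerName set in the chart valuesObject
provisioner: cluster.local/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```

The Retain variant differs only in `metadata.name` and `reclaimPolicy: Retain`.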

Ansible Role (ansible/roles/deploy_nfs_provisioner/)

  • Applies ArgoCD Application manifest via kubernetes.core.k8s
  • Polls for Application Synced/Healthy (30 retries × 10s)
  • Applies and validates all three StorageClasses
  • Follows repo conventions: variable prefixing, artifact persistence, two-step file validation

Key Design Decisions

All Helm values in spec.source.helm.valuesObject - no external values files. StorageClasses committed as static YAMLs for auditability and independent lifecycle management from the provisioner deployment.

Warning

Firewall rules blocked me from connecting to one or more addresses

I tried to connect to the following addresses, but was blocked by firewall rules:

  • galaxy.ansible.com
    • Triggering command: /opt/pipx_bin/ansible-galaxy ansible-galaxy collection install -r ansible/requirements.yml (dns block)
    • Triggering command: /home/REDACTED/.local/bin/ansible-galaxy ansible-galaxy collection install kubernetes.core --force (dns block)
  • https://api.github.com/repos/ansible/ansible-lint/releases/latest
    • Triggering command: /home/REDACTED/.local/bin/ansible-lint ansible-lint ansible/roles/deploy_nfs_provisioner/ (http block)

If you need me to access, download, or install something from one of these locations, the firewall allowlist will need to be updated.

Original prompt

This section details the original issue you should resolve.

<issue_title>deploy_nfs_provisioner role + static ArgoCD Application YAML</issue_title>
<issue_description>## Goal

Implement an Ansible role deploy_nfs_provisioner that deploys nfs-subdir-external-provisioner via ArgoCD using a static Application manifest checked into the repo at:

  • argocd/nfs_provisioner/nfs_provisioner.yml

All Helm overrides must be expressed in:

  • spec.source.helm.valuesObject

The role then ensures the required StorageClasses exist, including a static no-provisioner class for handcrafted PVs.

Non-Goals

  • No PVC/PV migrations in this PR.
  • No app changes (Paperless/Crafty) in this PR.
  • Don’t remove Synology CSI in this PR.

Repo Changes

1) Add the ArgoCD Application manifest (static file)

Create:

  • argocd/nfs_provisioner/nfs_provisioner.yml

This file must be a complete ArgoCD Application and must match your existing “canonical” Application schema (project, destination, syncPolicy, namespace patterns).

Key requirements inside the Application:

  • Deploy Helm chart for nfs-subdir-external-provisioner

  • Pin chart version (no floating latest)

  • Target namespace (create via sync option or separate namespace manifest—follow your existing convention)

  • Set a fixed provisioner name via valuesObject (example: cluster.local/nfs-subdir-external-provisioner)

  • Configure the NFS server/path:

    • server: 192.168.226.6
    • path: /volume2/k8s
  • Prefer chart config that does not create StorageClasses (we will manage SCs ourselves as repo YAML)

    • If chart requires creating one, make it benign and consistent; but first try to disable SC creation via chart values, per upstream docs.

All Helm overrides go here:

spec:
  source:
    helm:
      valuesObject:
        ...

2) Add StorageClass manifests (static files, no templates)

Create these files:

  • k8s/storageclasses/nfs-synology-delete.yml
  • k8s/storageclasses/nfs-synology-retain.yml
  • k8s/storageclasses/nfs-static-retain.yml

Dynamic SCs (must match the provisionerName in the chart valuesObject):

  • nfs-synology-delete

    • provisioner: cluster.local/nfs-subdir-external-provisioner
    • reclaimPolicy: Delete
    • volumeBindingMode: Immediate
    • allowVolumeExpansion: true
  • nfs-synology-retain

    • same provisioner
    • reclaimPolicy: Retain

Static SC:

  • nfs-static-retain

    • provisioner: kubernetes.io/no-provisioner
    • reclaimPolicy: Retain
    • volumeBindingMode: Immediate
    • allowVolumeExpansion: false
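
Put together, the static class presumably reads (manifest layout assumed, fields per the spec above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-static-retain
# No dynamic provisioning; PVs for this class are handcrafted
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: false
```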

No kustomization required unless your repo style demands it; the Ansible role will apply these YAMLs directly.


Ansible Role: deploy_nfs_provisioner

Location / structure

Create:

  • ansible/roles/deploy_nfs_provisioner/

    • meta/main.yml
    • meta/argument_specs.yml
    • defaults/main.yml
    • tasks/main.yml

Role variables (role-prefixed)

Required:

  • deploy_nfs_provisioner_kubeconfig
  • deploy_nfs_provisioner_context
  • deploy_nfs_provisioner_argocd_namespace (default argocd)

Paths (repo-relative, default to the new files):

  • deploy_nfs_provisioner_argocd_application_path (default argocd/nfs_provisioner/nfs_provisioner.yml)
  • deploy_nfs_provisioner_storageclass_paths (default list of the 3 SC YAML files)

Optional:

  • deploy_nfs_provisioner_wait_timeout_seconds (sane default)
  • deploy_nfs_provisioner_wait_retries / deploy_nfs_provisioner_wait_delay
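
A sketch of `defaults/main.yml` under these requirements; the app name and the 30×10s values are illustrative (the retry/delay figures match the PR summary, the rest follows the defaults listed above):

```yaml
# defaults/main.yml (sketch; illustrative values, not confirmed by the diff)
deploy_nfs_provisioner_argocd_namespace: argocd
deploy_nfs_provisioner_argocd_app_name: nfs-provisioner   # assumed; keep consistent with the manifest
deploy_nfs_provisioner_argocd_application_path: argocd/nfs_provisioner/nfs_provisioner.yml
deploy_nfs_provisioner_storageclass_paths:
  - k8s/storageclasses/nfs-synology-delete.yml
  - k8s/storageclasses/nfs-synology-retain.yml
  - k8s/storageclasses/nfs-static-retain.yml
deploy_nfs_provisioner_wait_retries: 30
deploy_nfs_provisioner_wait_delay: 10
```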

Tasks: required behavior

1) Assertions

  • Assert kubeconfig/context present

  • Assert the repo YAML files exist on disk where the playbook runs (use ansible.builtin.stat)

    • If your Ansible runs in CI, these files will exist in the checked-out workspace.
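
The file-existence check can be sketched as a stat-then-assert pair (register name is illustrative):

```yaml
- name: Stat repo manifests on the controller
  ansible.builtin.stat:
    path: "{{ item }}"
  loop: "{{ [deploy_nfs_provisioner_argocd_application_path] + deploy_nfs_provisioner_storageclass_paths }}"
  register: _manifest_stats

- name: Assert all manifests exist
  ansible.builtin.assert:
    that: item.stat.exists
    fail_msg: "Missing manifest: {{ item.item }}"
  loop: "{{ _manifest_stats.results }}"
```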

2) Apply ArgoCD Application YAML (as-is)

Use kubernetes.core.k8s with src: to apply the file:

  kubernetes.core.k8s:
    kubeconfig: ...
    context: ...
    state: present
    src: "{{ deploy_nfs_provisioner_argocd_application_path }}"

No templating. No Jinja.

3) Wait for ArgoCD Application Healthy/Synced

Use kubernetes.core.k8s_info polling on:

  • kind: Application
  • namespace: {{ deploy_nfs_provisioner_argocd_namespace }}
  • name: whatever metadata.name is in the YAML (hardcode in defaults as deploy_nfs_provisioner_argocd_app_name and keep it consistent with the file)

until conditions:

  • resource exists
  • .status.sync.status == "Synced"
  • .status.health.status == "Healthy"

Use defensive guards (is defined, | default(...)) to avoid crashes.
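
One defensive pattern is to chain `.get()` lookups so a not-yet-populated `.status` never raises (register name is illustrative; `until` lists are ANDed and evaluated in order):

```yaml
- name: Wait for ArgoCD Application to be Synced and Healthy
  kubernetes.core.k8s_info:
    kubeconfig: "{{ deploy_nfs_provisioner_kubeconfig }}"
    context: "{{ deploy_nfs_provisioner_context }}"
    api_version: argoproj.io/v1alpha1
    kind: Application
    namespace: "{{ deploy_nfs_provisioner_argocd_namespace }}"
    name: "{{ deploy_nfs_provisioner_argocd_app_name }}"
  register: _app_info
  retries: "{{ deploy_nfs_provisioner_wait_retries }}"
  delay: "{{ deploy_nfs_provisioner_wait_delay }}"
  until:
    - _app_info.resources | default([]) | length > 0
    - _app_info.resources[0].get('status', {}).get('sync', {}).get('status', '') == 'Synced'
    - _app_info.resources[0].get('status', {}).get('health', {}).get('status', '') == 'Healthy'
```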

4) Apply StorageClasses YAMLs (as-is)

Loop over deploy_nfs_provisioner_storageclass_paths and apply via kubernetes.core.k8s src:.
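
That loop reduces to a single task, for example:

```yaml
- name: Apply StorageClass manifests as-is
  kubernetes.core.k8s:
    kubeconfig: "{{ deploy_nfs_provisioner_kubeconfig }}"
    context: "{{ deploy_nfs_provisioner_context }}"
    state: present
    src: "{{ item }}"
  loop: "{{ deploy_nfs_provisioner_storageclass_paths }}"
```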

5) Validate StorageClasses exist

Use kubernetes.core.k8s_info for StorageClass and assert presence of:

  • nfs-synology-delete
  • nfs-synology-retain
  • nfs-static-retain
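
A sketch of the validation step (register name is illustrative):

```yaml
- name: Fetch cluster StorageClasses
  kubernetes.core.k8s_info:
    kubeconfig: "{{ deploy_nfs_provisioner_kubeconfig }}"
    context: "{{ deploy_nfs_provisioner_context }}"
    api_version: storage.k8s.io/v1
    kind: StorageClass
  register: _sc_info

- name: Assert required StorageClasses exist
  ansible.builtin.assert:
    that: item in (_sc_info.resources | map(attribute='metadata.name') | list)
    fail_msg: "StorageClass {{ item }} not found"
  loop:
    - nfs-synology-delete
    - nfs-synology-retain
    - nfs-static-retain
```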

ArgoCD Application manifest content requirements (what Copilot must implement)

Inside ...



Copilot AI and others added 6 commits December 29, 2025 03:46
Co-authored-by: SRF-Audio <16975040+SRF-Audio@users.noreply.github.com>
Copilot AI changed the title [WIP] Implement deploy_nfs_provisioner role with static ArgoCD Application YAML Add deploy_nfs_provisioner role with static ArgoCD Application Dec 29, 2025
Copilot AI requested a review from SRF-Audio December 29, 2025 04:00
@SRF-Audio SRF-Audio marked this pull request as ready for review December 29, 2025 21:06
@SRF-Audio SRF-Audio merged commit 160bb78 into main Dec 29, 2025
3 of 12 checks passed
@SRF-Audio SRF-Audio deleted the copilot/deploy-nfs-provisioner-role branch December 29, 2025 21:06
Development

Successfully merging this pull request may close these issues.

deploy_nfs_provisioner role + static ArgoCD Application YAML

2 participants