Add deploy_nfs_provisioner role with static ArgoCD Application #141
Implements an Ansible role that deploys nfs-subdir-external-provisioner via ArgoCD using a static, non-templated Application manifest. StorageClasses are managed separately from the Helm chart for explicit version control.
Files Added

ArgoCD Application (`argocd/nfs_provisioner/nfs_provisioner.yml`)
- Deploys Helm chart `nfs-subdir-external-provisioner:4.0.18`
- NFS server/path: `192.168.226.6:/volume2/k8s`
- Chart-managed StorageClass creation disabled (`storageClass.create: false`)
- Provisioner name: `cluster.local/nfs-subdir-external-provisioner`

StorageClasses (`k8s/storageclasses/`)
- `nfs-synology-delete.yml` - dynamic, Delete reclaim policy
- `nfs-synology-retain.yml` - dynamic, Retain reclaim policy
- `nfs-static-retain.yml` - static PVs, `kubernetes.io/no-provisioner`

Ansible Role (`ansible/roles/deploy_nfs_provisioner/`)
- Applies the static manifests via `kubernetes.core.k8s`

Key Design Decisions

All Helm values live in `spec.source.helm.valuesObject` - no external values files. StorageClasses are committed as static YAMLs for auditability and an independent lifecycle from the provisioner deployment.
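As a sketch of that decision, the Helm portion of the Application might look like the excerpt below. The `nfs.*` and `storageClass.*` key names follow the nfs-subdir-external-provisioner chart's documented values, but treat them as assumptions to verify against the pinned chart version's values schema:

```yaml
# Excerpt from argocd/nfs_provisioner/nfs_provisioner.yml (sketch only;
# verify key names against the chart's values.yaml for version 4.0.18)
spec:
  source:
    chart: nfs-subdir-external-provisioner
    targetRevision: 4.0.18
    helm:
      valuesObject:
        nfs:
          server: 192.168.226.6
          path: /volume2/k8s
        storageClass:
          create: false  # SCs are managed as repo YAML instead
          provisionerName: cluster.local/nfs-subdir-external-provisioner
```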
Original prompt
This section details the original issue to resolve.
<issue_title>deploy_nfs_provisioner role + static ArgoCD Application YAML</issue_title>
<issue_description>## Goal

Implement an Ansible role `deploy_nfs_provisioner` that deploys nfs-subdir-external-provisioner via ArgoCD using a static Application manifest checked into the repo at `argocd/nfs_provisioner/nfs_provisioner.yml`. All Helm overrides must be expressed in `spec.source.helm.valuesObject`. The role then ensures the required StorageClasses exist, including a static no-provisioner class for handcrafted PVs.
Non-Goals
Repo Changes
1) Add the ArgoCD Application manifest (static file)

Create: `argocd/nfs_provisioner/nfs_provisioner.yml`

This file must be a complete ArgoCD `Application` and must match your existing "canonical" Application schema (project, destination, syncPolicy, namespace patterns).

Key requirements inside the Application:
- Deploy the Helm chart for `nfs-subdir-external-provisioner`
- Pin the chart version (no floating latest)
- Target namespace (create via sync option or a separate namespace manifest; follow your existing convention)
- Set a fixed provisioner name via valuesObject (example: `cluster.local/nfs-subdir-external-provisioner`)
- Configure the NFS server/path: `192.168.226.6`, `/volume2/k8s`
- Prefer chart config that does not create StorageClasses (we will manage SCs ourselves as repo YAML)
- All Helm overrides go in `spec.source.helm.valuesObject`
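Under those requirements, the static Application could be sketched as below. The `repoURL`, `project`, `metadata.name`, destination namespace, and sync options are placeholders to adapt to the repo's canonical Application schema, and the `storageClass.provisionerName` key should be verified against the chart's values:

```yaml
# argocd/nfs_provisioner/nfs_provisioner.yml (sketch; adapt project,
# destination, and syncPolicy to the existing canonical schema)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nfs-provisioner          # keep in sync with deploy_nfs_provisioner_argocd_app_name
  namespace: argocd
spec:
  project: default               # placeholder: use your existing project
  source:
    repoURL: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
    chart: nfs-subdir-external-provisioner
    targetRevision: 4.0.18       # pinned, no floating latest
    helm:
      valuesObject:
        nfs:
          server: 192.168.226.6
          path: /volume2/k8s
        storageClass:
          create: false          # SCs are managed as repo YAML
          provisionerName: cluster.local/nfs-subdir-external-provisioner
  destination:
    server: https://kubernetes.default.svc
    namespace: nfs-provisioner   # placeholder target namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true     # or use a separate namespace manifest
```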
2) Add StorageClass manifests (static files, no templates)

Create these files:
- `k8s/storageclasses/nfs-synology-delete.yml`
- `k8s/storageclasses/nfs-synology-retain.yml`
- `k8s/storageclasses/nfs-static-retain.yml`

Dynamic SCs (must match the provisionerName in the chart valuesObject):
- `nfs-synology-delete`: provisioner `cluster.local/nfs-subdir-external-provisioner`, reclaimPolicy `Delete`, volumeBindingMode `Immediate`, allowVolumeExpansion `true`
- `nfs-synology-retain`: same, but reclaimPolicy `Retain`

Static SC:
- `nfs-static-retain`: provisioner `kubernetes.io/no-provisioner`, reclaimPolicy `Retain`, volumeBindingMode `Immediate`, allowVolumeExpansion `false`

No kustomization is required unless your repo style demands it; the Ansible role will apply these YAMLs directly.
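Two of the classes from the list above could be written as follows, using the standard `storage.k8s.io/v1` fields (shown here as one sketch; each document would live in its own file as listed):

```yaml
# k8s/storageclasses/nfs-synology-delete.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-synology-delete
provisioner: cluster.local/nfs-subdir-external-provisioner  # must match the chart valuesObject
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
---
# k8s/storageclasses/nfs-static-retain.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-static-retain
provisioner: kubernetes.io/no-provisioner  # static, handcrafted PVs only
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: false
```

`nfs-synology-retain.yml` would differ from the first document only in `metadata.name` and `reclaimPolicy: Retain`.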
Ansible Role: `deploy_nfs_provisioner`

Location / structure

Create under `ansible/roles/deploy_nfs_provisioner/`:
- `meta/main.yml`
- `meta/argument_specs.yml`
- `defaults/main.yml`
- `tasks/main.yml`

Role variables (role-prefixed)

Required:
- `deploy_nfs_provisioner_kubeconfig`
- `deploy_nfs_provisioner_context`
- `deploy_nfs_provisioner_argocd_namespace` (default `argocd`)

Paths (repo-relative, defaulting to the new files):
- `deploy_nfs_provisioner_argocd_application_path` (default `argocd/nfs_provisioner/nfs_provisioner.yml`)
- `deploy_nfs_provisioner_storageclass_paths` (default: list of the three SC YAML files)

Optional:
- `deploy_nfs_provisioner_wait_timeout_seconds` (sane default)
- `deploy_nfs_provisioner_wait_retries` / `deploy_nfs_provisioner_wait_delay`

Tasks: required behavior
1) Assertions
- Assert kubeconfig/context are present
- Assert the repo YAML files exist on disk where the playbook runs (use `ansible.builtin.stat`)

2) Apply ArgoCD Application YAML (as-is)

Use `kubernetes.core.k8s` with `src:` to apply the file:

```yaml
kubernetes.core.k8s:
  kubeconfig: ...
  context: ...
  state: present
  src: "{{ deploy_nfs_provisioner_argocd_application_path }}"
```

No templating. No Jinja.
3) Wait for ArgoCD Application Healthy/Synced

Use `kubernetes.core.k8s_info`, polling on:
- kind: `Application`
- namespace: `{{ deploy_nfs_provisioner_argocd_namespace }}`
- `metadata.name` as set in the YAML (hardcode it in defaults as `deploy_nfs_provisioner_argocd_app_name` and keep it consistent with the file)
- `until` conditions: `.status.sync.status == "Synced"` and `.status.health.status == "Healthy"`

Use defensive guards (`is defined`, `| default(...)`) to avoid crashes.

4) Apply StorageClass YAMLs (as-is)
Loop over `deploy_nfs_provisioner_storageclass_paths` and apply each file via `kubernetes.core.k8s` with `src:`.

5) Validate StorageClasses exist
Use `kubernetes.core.k8s_info` for `StorageClass` and assert presence of:
- `nfs-synology-delete`
- `nfs-synology-retain`
- `nfs-static-retain`

ArgoCD Application manifest content requirements (what Copilot must implement)
Inside ...
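The wait and validation behavior from steps 3 and 5 above could be sketched in `tasks/main.yml` roughly as follows. The register variable names are illustrative, and the `until` expression uses the `| default(...)` guard style the issue asks for:

```yaml
# ansible/roles/deploy_nfs_provisioner/tasks/main.yml (excerpt; sketch only)
- name: Wait for ArgoCD Application to be Synced and Healthy
  kubernetes.core.k8s_info:
    kubeconfig: "{{ deploy_nfs_provisioner_kubeconfig }}"
    context: "{{ deploy_nfs_provisioner_context }}"
    api_version: argoproj.io/v1alpha1
    kind: Application
    name: "{{ deploy_nfs_provisioner_argocd_app_name }}"
    namespace: "{{ deploy_nfs_provisioner_argocd_namespace }}"
  register: _app_info
  # default('') guards against the status block not existing yet
  until: >-
    _app_info.resources | length > 0 and
    (_app_info.resources[0].status.sync.status | default('')) == 'Synced' and
    (_app_info.resources[0].status.health.status | default('')) == 'Healthy'
  retries: "{{ deploy_nfs_provisioner_wait_retries }}"
  delay: "{{ deploy_nfs_provisioner_wait_delay }}"

- name: Assert required StorageClasses exist
  kubernetes.core.k8s_info:
    kubeconfig: "{{ deploy_nfs_provisioner_kubeconfig }}"
    context: "{{ deploy_nfs_provisioner_context }}"
    api_version: storage.k8s.io/v1
    kind: StorageClass
    name: "{{ item }}"
  register: _sc_info
  failed_when: _sc_info.resources | length == 0
  loop:
    - nfs-synology-delete
    - nfs-synology-retain
    - nfs-static-retain
```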