diff --git a/Mastering-Bash/Bash-Exit-Status.md b/Mastering-Bash/Bash-Exit-Status.md index 5f49233..18bb438 100644 --- a/Mastering-Bash/Bash-Exit-Status.md +++ b/Mastering-Bash/Bash-Exit-Status.md @@ -79,5 +79,6 @@ In this lab you will: exit 0 fi - exit 1 + # Exit status 3 will be returned if no previous condition was met. + exit 3 ``` diff --git a/Mastering-Bash/Bash-Expansions.md b/Mastering-Bash/Bash-Expansions.md index 50a2533..c0f0e5c 100644 --- a/Mastering-Bash/Bash-Expansions.md +++ b/Mastering-Bash/Bash-Expansions.md @@ -34,11 +34,11 @@ In this lab you will: leave the default value for the second run: ```console - $ ./default_answer pizza + $ ./default_answer.sh pizza The answer to life, the universe and everything is pizza Name of the script is ./default_answer - $ ./default_answer + $ ./default_answer.sh The answer to life, the universe and everything is 42 Name of the script is ./default_answer ``` diff --git a/Mastering-Bash/Bash-Outputs.md b/Mastering-Bash/Bash-Outputs.md index dae4a30..0c1c30f 100644 --- a/Mastering-Bash/Bash-Outputs.md +++ b/Mastering-Bash/Bash-Outputs.md @@ -11,7 +11,7 @@ In this lab you will: 5. Append to the previously created file all the lines that contain a `:9` from `/etc/group` file. 6. Sort the content of `results.txt` by name. -7. Use the `less` pager to view the content of `/var/log/boot.log` and invoke an +7. Use the `less` pager to view the content of `/etc/vimrc` and invoke an editor to edit the file. ## Solution @@ -85,12 +85,14 @@ In this lab you will: unbound:x:994: ``` -7. Use the less pager to view the content of `/var/log/boot.log` and invoke an +7. Use the less pager to view the content of `/etc/vimrc` and invoke an editor to edit the file: ```console - $ less /var/log/boot.log + $ less /etc/vimrc (less interface opens) ``` - and then press `v`. + and then press `v`. You will see the `less` interface turning into a `vim` + one, but without the opportunity to make any change, since the file is not + owned by the `kirater` unprivileged user. diff --git a/Mastering-Bash/Bash-Signals-Kill.md b/Mastering-Bash/Bash-Signals-Kill.md index b7195c0..aac6564 100644 --- a/Mastering-Bash/Bash-Signals-Kill.md +++ b/Mastering-Bash/Bash-Signals-Kill.md @@ -18,7 +18,7 @@ In this lab you will: 1. Launch the sleep command and then press `CTLR+Z`. Result should be: ```console - [kirater@machine ~] $ sleep 100 + $ sleep 100 ^Z [1]+ Stopped sleep 100 ``` diff --git a/Workshops/Kubernetes-Security/README.md b/Workshops/Kubernetes-Security/README.md new file mode 100644 index 0000000..4bd8013 --- /dev/null +++ b/Workshops/Kubernetes-Security/README.md @@ -0,0 +1,39 @@ +# Kubernetes Security Workshop + +## Environment architecture + +The overall architecture of this workshop project is based upon a single +Minikube cluster installation. + +Everything is meant to be created on a physical machine or a virtual one +with, *at least*, 2 CPU and 4 Gigabytes of RAM. 4 CPU and 8 Gigabytes of RAM +will be ideal. + +Software requirements for the main machine are essentially just the Docker +service, everything else will be covered in the various stages. + +The outputs reported in the various stages were taken from a [AlmaLinux 9](https://repo.almalinux.org/almalinux/9/cloud/x86_64/images/AlmaLinux-9-GenericCloud-latest.x86_64.qcow2) +virtual machine with 4 CPUs and 8 Gigabytes of RAM. 
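+
+A quick way to check that the machine you are going to use fits these
+requirements is shown below (a minimal sketch, assuming a Linux host with
+Docker already installed; the outputs come from the reference virtual machine
+and will differ on your system):
+
+```console
+$ nproc
+4
+
+$ free -h | grep -i mem
+Mem:           7.7Gi       ...
+
+$ docker --version
+Docker version 28.4.0, ...
+```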
+
+## Workshop structure
+
+The structure of the workshop is based on stages:
+
+- Stage 0: [Install Minikube](../../Common/Kubernetes-Install-Minikube.md).
+- Stage 1: [Network Policies](Stage-1-Network-Policies.md).
+- Stage 2: [Kyverno, Policy as Code](Stage-2-Kyverno-Policy-as-Code.md).
+- Stage 3: [Cosign, Sign Container Images](Stage-3-Sign-Containers-with-Cosign.md).
+- Stage 4: [Policy Reporter UI](Stage-4-Policy-Reporter-Visualization.md).
+
+## References
+
+There are several technologies covered in this workshop; the main ones are
+listed here:
+
+- [Kubernetes](https://kubernetes.io/), the container orchestration platform.
+- [Kyverno](https://kyverno.io/), declarative Policy as Code for Kubernetes.
+- [Cosign](https://github.com/sigstore/cosign), an OCI container signing tool.
+
+## Author
+
+Raoul Scarazzini ([raoul.scarazzini@kiratech.it](mailto:raoul.scarazzini@kiratech.it))
diff --git a/Workshops/Kubernetes-Security/Stage-1-Network-Policies.md b/Workshops/Kubernetes-Security/Stage-1-Network-Policies.md
new file mode 100644
index 0000000..0d60dc2
--- /dev/null
+++ b/Workshops/Kubernetes-Security/Stage-1-Network-Policies.md
@@ -0,0 +1,186 @@
+# Kubernetes Network Policies
+
+Network Policies work in Kubernetes at the network level and help you control
+traffic flow at the IP address or port level for the TCP, UDP, and SCTP
+protocols.
+
+Check [the official Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#network-traffic-filtering)
+for all the details about these components.
+
+## Requisites
+
+Network Policies must be supported by the network plugin. By default Minikube
+provides no such support, so it must be started with a network plugin that
+does, such as Calico:
+
+```console
+$ minikube stop && minikube delete && minikube start --cni=calico
+* Stopping node "minikube" ...
+* Powering off "minikube" via SSH ...
+* 1 node stopped.
+* Deleting "minikube" in docker ...
+* Deleting container "minikube" ...
+* Removing /home/kirater/.minikube/machines/minikube ...
+* Removed all traces of the "minikube" cluster.
+* minikube v1.37.0 on Almalinux 9.4 (kvm/amd64)
+* Automatically selected the docker driver. Other choices: none, ssh
+* Using Docker driver with root privileges
+* Starting "minikube" primary control-plane node in "minikube" cluster
+* Pulling base image v0.0.48 ...
+* Creating docker container (CPUs=2, Memory=3900MB) ...
+* Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
+* Configuring Calico (Container Networking Interface) ...
+* Verifying Kubernetes components...
+  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
+* Enabled addons: default-storageclass, storage-provisioner
+* Done!
kubectl is now configured to use "minikube" cluster and "default" namespace by default +``` + +## Network Policies with namespaces + +Given a situation where you have these three namespaces: + +kubectl delete namespace backend frontend external + +```console +$ kubectl create namespace backend +namespace "backend" created + +$ kubectl create namespace frontend +namespace "frontend" created + +$ kubectl create namespace external +namespace "external" created +``` + +And inside the `backend` namespace you have a deployment with a listening +service, in this case `nginx`: + +```console +$ kubectl --namespace backend create deployment backend --image nginx:latest +deployment backend created + +$ kubectl wait --namespace backend --for=condition=ready pod --selector=app=backend --timeout=90s +pod/backend-86565945bf-xqxdr condition met +``` + +By creating two other applications on the other two namespaces you can simulate +connections coming from different places: + +```console +$ kubectl -n frontend run frontend --image=curlimages/curl:latest --restart=Never -- /bin/sh -c "while true; do sleep 3600; done" +pod/frontend created + +$ kubectl -n external run external --image=curlimages/curl:latest --restart=Never -- /bin/sh -c "while true; do sleep 3600; done" +pod/external created +``` + +Once you get the backend POD IP address and the name of the two client PODs: + +```console +$ BACKENDIP=$(kubectl -n backend get pod -l app=backend -o jsonpath="{.items[0].status.podIP}") +(no output) + +$ FRONTENDPOD=$(kubectl -n frontend get pod -l run=frontend -o jsonpath='{.items[0].metadata.name}') +(no output) + +$ EXTERNALPOD=$(kubectl -n external get pod -l run=external -o jsonpath='{.items[0].metadata.name}') +(no output) +``` + +You can check what is the behavior without a Network Policy, so that the +connections should work for both clients: + +```console +$ kubectl -n frontend exec -it $FRONTENDPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE +REACHABLE + +$ kubectl -n external exec -it $EXTERNALPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE +REACHABLE +``` + +Then, to test Network Policies, each namespace should get a label, as follows: + +```console +$ kubectl label namespace backend name=backend +namespace/backend labeled + +$ kubectl label namespace frontend name=frontend +namespace/frontend labeled + +$ kubectl label namespace external name=external +namespace/external labeled +``` + +And then it will be possible to create a Network Policy that will allow just the +connections coming from the `frontend` labeled namespace: + +```console +$ kubectl create -f - < /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE +REACHABLE + +$ kubectl -n external exec -it $EXTERNALPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE +UNREACHABLE +``` + +## Testing everything + +The script `stage-1-network-policies-namespaces.sh` will make it possible to +test, check and clean the described configuration, as follows: + +```console +$ ./stage-1-network-policies-namespaces.sh +namespace/backend created +namespace/frontend created +namespace/external created +deployment.apps/backend created +pod/backend-86565945bf-lmzgn condition met +pod/frontend created +pod/external created +Before NetworkPolicy (frontend): REACHABLE +Before NetworkPolicy (external): REACHABLE +namespace/frontend labeled +namespace/backend labeled +namespace/external labeled 
+networkpolicy.networking.k8s.io/deny-all-except-frontend created +After NetworkPolicy (frontend): REACHABLE +After NetworkPolicy (external): UNREACHABLE +``` + +The output demonstrates that **before** applying the Network Policies all the +communications between `frontend`, `external` and `backend` are allowed, and +right after, just `frontend` is able to contact `backend`. + +Note that to make this work **namespaces must be labeled**. + +All the resources can be queried, and when you're done everything can be cleaned +with: + +```console +$ ./stage-1-network-policies-namespaces.sh clean +Cleaning up things... +namespace "backend" deleted +namespace "frontend" deleted +namespace "external" deleted +``` diff --git a/Workshops/Kubernetes-Security/Stage-2-Kyverno-Policy-as-Code.md b/Workshops/Kubernetes-Security/Stage-2-Kyverno-Policy-as-Code.md new file mode 100644 index 0000000..d6826a4 --- /dev/null +++ b/Workshops/Kubernetes-Security/Stage-2-Kyverno-Policy-as-Code.md @@ -0,0 +1,240 @@ +# Kubernetes Policy as Code with Kyverno + +Managing Network Policies can become quite painful when you want default +settings to be applied to the resources and your cluster. + +To create policies using code one of the best solutions is [Kyverno](https://kyverno.io). + +## Requisites + +The fastest way to install and manage Kyverno is by using Helm: + +```console +$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 +(no output) + +$ chmod 700 get_helm.sh +(no output) + +$ ./get_helm.sh +Downloading https://get.helm.sh/helm-v3.19.0-linux-amd64.tar.gz +Verifying checksum... Done. +Preparing to install helm into /usr/local/bin +helm installed into /usr/local/bin/helm + +$ helm repo add kyverno https://kyverno.github.io/kyverno/ +"kyverno" has been added to your repositories + +$ helm repo update +Hang tight while we grab the latest from your chart repositories... +...Successfully got an update from the "kyverno" chart repository +Update Complete. ⎈Happy Helming!⎈ + +$ helm upgrade --install kyverno kyverno/kyverno \ + --namespace kyverno --create-namespace \ + --set admissionController.hostNetwork=true +NAME: kyverno +LAST DEPLOYED: Tue Oct 14 13:43:56 2025 +NAMESPACE: kyverno +STATUS: deployed +REVISION: 1 +NOTES: +Chart version: 3.5.2 +Kyverno version: v1.15.2 +... +``` + +## Configure the Cluster Policies + +By default Kyverno installs various admission webhooks: + +```console +$ kubectl get validatingwebhookconfigurations | grep kyverno +kyverno-cel-exception-validating-webhook-cfg 1 17s +kyverno-cleanup-validating-webhook-cfg 1 23s +kyverno-exception-validating-webhook-cfg 1 17s +kyverno-global-context-validating-webhook-cfg 1 17s +kyverno-policy-validating-webhook-cfg 1 17s +kyverno-resource-validating-webhook-cfg 0 17s +kyverno-ttl-validating-webhook-cfg 1 23s +``` + +These are used by the custom resources like `ClusterPolicy` to implement the +various behaviors. 
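+
+Before creating any policy it is worth checking that all the Kyverno
+controllers are up and running (pod name suffixes and ages below are
+illustrative and will differ in your cluster):
+
+```console
+$ kubectl get pods -n kyverno
+NAME                                READY   STATUS    RESTARTS   AGE
+kyverno-admission-controller-...    1/1     Running   0          2m
+kyverno-background-controller-...   1/1     Running   0          2m
+kyverno-cleanup-controller-...      1/1     Running   0          2m
+kyverno-reports-controller-...      1/1     Running   0          2m
+```
+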
+ +For our test we're going to create two Cluster Policies, the first that will +assign a label named `name` to any created namespace: + +```yaml +apiVersion: kyverno.io/v1 +kind: ClusterPolicy +metadata: + name: add-namespace-name-label +spec: + rules: + - name: add-namespace-name-label + match: + resources: + kinds: + - Namespace + mutate: + patchStrategicMerge: + metadata: + labels: + name: "{{request.object.metadata.name}}" +``` + +And the second one that implements a "deny all" by default for each newly +created namespace: + +```yaml +apiVersion: kyverno.io/v1 +kind: ClusterPolicy +metadata: + name: add-default-deny +spec: + rules: + - name: add-default-deny + match: + resources: + kinds: + - Namespace + generate: + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + name: default-deny-all + namespace: "{{request.object.metadata.name}}" + synchronize: true + data: + spec: + podSelector: {} + policyTypes: + - Ingress + - Egress +``` + +After applying this policy, no Pod will be able to receive nor send network +connections, and so any modification will be covered with an override. + +For this lab, we want to make the `backend` Pod in the `backend` namespace to be +reachable only by the `frontend` pod on the `frontend` namespace. + +We need two Network Policies, one for each involved namespace. + +The first one will define the `Ingress` rule so that the `frontend` pod will be +reachable by the `frontend` pod, and the `Egress` rule so that the `frontend` +pod in the `frontend` namespace will reach the `backend` pod in the `bacekend` +namespace: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-ingress-egress-from-backend + namespace: frontend +spec: + egress: + - to: + - namespaceSelector: + matchLabels: + name: backend + - podSelector: + matchLabels: + app: backend + ingress: + - from: + - namespaceSelector: + matchLabels: + name: backend + - podSelector: + matchLabels: + app: backend + podSelector: + matchLabels: + run: frontend + policyTypes: + - Ingress + - Egress +``` + +The second one will define the `Ingress` rule so that the `backend` pod in the +`backend` namespace will be reachable to the `80` port from the `frontend` pod +in the `frontend` namespace, and the `Egress` rule to allow any outgoing +connection made by the `backend` pod in the `backend` namespace: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-ingress-from-frontend-and-egress-to-any + namespace: backend +spec: + egress: + - {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: frontend + - podSelector: + matchLabels: + run: frontend + ports: + - port: 80 + protocol: TCP + podSelector: + matchLabels: + app: backend + policyTypes: + - Ingress + - Egress +``` + +## Testing everything + +The script `stage-2-use-default-network-policy-on-namespaces.sh` will make it +possible to test, check and clean the described configuration, as follows: + +```console +$ ./stage-2-default-network-policy-namespaces.sh +clusterpolicy.kyverno.io "add-namespace-name-label" deleted +clusterpolicy.kyverno.io "add-default-deny" deleted +namespace "backend" deleted +namespace "frontend" deleted +namespace "external" deleted +clusterpolicy.kyverno.io/add-namespace-name-label created +clusterpolicy.kyverno.io/add-default-deny created +namespace/backend created +namespace/frontend created +namespace/external created +deployment.apps/backend created +pod/backend-86565945bf-l9wlj condition met +pod/frontend created +pod/external created +Before 
NetworkPolicy (frontend): UNREACHABLE +Before NetworkPolicy (external): UNREACHABLE +networkpolicy.networking.k8s.io/allow-ingress-egress-from-backend created +networkpolicy.networking.k8s.io/allow-ingress-from-frontend-and-egress-to-any created +After NetworkPolicy (frontend): REACHABLE +After NetworkPolicy (external): UNREACHABLE +``` + +The output demonstrates that **after** adding the additional Network Policy to +allow communications between `frontend` and `backend`, just `frontend` is able +to contact `backend`. + +Note that the **namespaces are automatically labeled** by the previously created +`add-namespace-name-label` Cluster Policy. + +All the resources can be queried, and when you're done everything can be cleaned +with: + +```console +$ ./stage-2-default-network-policy-namespaces.sh clean +Cleaning up things... +clusterpolicy.kyverno.io "add-namespace-name-label" deleted +clusterpolicy.kyverno.io "add-default-deny" deleted +namespace "backend" deleted +namespace "frontend" deleted +namespace "external" deleted +``` diff --git a/Workshops/Kubernetes-Security/Stage-3-Sign-Containers-with-Cosign.md b/Workshops/Kubernetes-Security/Stage-3-Sign-Containers-with-Cosign.md new file mode 100644 index 0000000..71355e5 --- /dev/null +++ b/Workshops/Kubernetes-Security/Stage-3-Sign-Containers-with-Cosign.md @@ -0,0 +1,305 @@ +# Sign Containers with Cosign + +One of the best way to secure your own software supply chain is to _know_ +exactly what is running on your systems. This means finding a way to identify +the validity of the software. + +Since software in the cloud-native era means containers, then identifying +software means verifying the validity of the containers. + +In this lab we will implement a way to sign and verify signatures of containers +using the `cosign` tool, and then we will implement a Cluster Policy to create +an admission control based upon containers signature with Kyverno. + +## Requisites + +The `cosign` binary is available on GitHub, and can be easily installed as follows: + +```console +$ export COSIGN_VERSION=v2.6.1 +(no output) + +$ sudo curl -sSfL https://github.com/sigstore/cosign/releases/download/${COSIGN_VERSION}/cosign-linux-amd64 \ + -o /usr/local/bin/cosign + +$ sudo chmod -v +x /usr/local/bin/cosign +mode of '/usr/local/bin/cosign' changed from 0644 (rw-r--r--) to 0755 (rwxr-xr-x) + +$ cosign version + ______ ______ _______. __ _______ .__ __. + / | / __ \ / || | / _____|| \ | | +| ,----'| | | | | (----`| | | | __ | \| | +| | | | | | \ \ | | | | |_ | | . ` | +| `----.| `--' | .----) | | | | |__| | | |\ | + \______| \______/ |_______/ |__| \______| |__| \__| +cosign: A tool for Container Signing, Verification and Storage in an OCI registry. + +GitVersion: v2.6.1 +GitCommit: 634fabe54f9fbbab55d821a83ba93b2d25bdba5f +GitTreeState: clean +BuildDate: 2025-09-26T17:24:36Z +GoVersion: go1.25.1 +Compiler: gc +Platform: linux/amd64 +``` + +We will use a key pair to sign and then verify the signatures. The key pair can +be created using `cosign` as follows: + +```console +$ cosign generate-key-pair +Enter password for private key: +Enter password for private key again: +Private key written to cosign.key +Public key written to cosign.pub + +$ ls -1 cosign.* +cosign.key +cosign.pub +``` + +## Build the container + +In this example we will create a local container build to be pushed on the +GitHub registry, [ghcr.io](ghcr.io). 
This means that we will need to create a +token from the web interface and then login using `docker`: + +```console +$ docker login ghcr.io +Username: +Password: + +WARNING! Your credentials are stored unencrypted in '/home/kirater/.docker/config.json'. +Configure a credential helper to remove this warning. See +https://docs.docker.com/go/credential-store/ + +Login Succeeded +``` + +We will work in a working directory named `build`: + +```console +$ mkdir -v build +mkdir: created directory 'build' +``` + +That will contain the `build/Dockerfile` file: + +```Dockerfile +FROM busybox:stable + +LABEL org.opencontainers.image.description="Kiratech Training Labs Sample Containter" + +ENV NCAT_MESSAGE="Container test" +ENV NCAT_HEADER="HTTP/1.1 200 OK" +ENV NCAT_PORT="8888" + +RUN addgroup -S nonroot && \ + adduser -S nonroot -G nonroot + +COPY start-ws.sh /usr/local/bin/start-ws.sh + +USER nonroot + +CMD ["/usr/local/bin/start-ws.sh"] +``` + +And the `build/start-ws.sh` file: + +```bash +#!/bin/sh + +/bin/nc -l -k -p ${NCAT_PORT} -e /bin/echo -e "${NCAT_HEADER}\n\n${NCAT_MESSAGE}" +``` + +That will be executable: + +```console +$ chmod -v +x build/start-ws.sh +mode of 'build/start-ws.sh' changed from 0644 (rw-r--r--) to 0755 (rwxr-xr-x) +``` + +With all this in place, the build can be started: + +```console +$ docker build -f build/Dockerfile -t ncat-http-msg-port:1.0 build/ +[+] Building 1.3s (8/8) FINISHED docker:default + => [internal] load build definition from Dockerfile 0.0s + => => transferring dockerfile: 454B 0.0s + => [internal] load metadata for docker.io/library/busybox:stable 0.9s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [1/3] FROM docker.io/library/busybox:stable@sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb 0.1s + => => resolve docker.io/library/busybox:stable@sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb 0.1s + => [internal] load build context 0.1s + => => transferring context: 91B 0.0s + => CACHED [2/3] RUN addgroup -S nonroot && adduser -S nonroot -G nonroot 0.0s + => CACHED [3/3] COPY start-ws.sh /usr/local/bin/start-ws.sh 0.0s + => exporting to image 0.1s + => => exporting layers 0.0s + => => writing image sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07 0.0s + => => naming to docker.io/library/ncat-http-msg-port:1.0 +``` + +To check if the built container behaves correctly, just launch it: + +```console +$ docker run --rm --name ncat-test --detach --publish 8888:8888 ncat-http-msg-port:1.0 +3b26b79bdbdb9be63542cc6f446c21a2634b9829fbae7a3213f66a3254104231 + +$ curl localhost:8888 +Container test + +$ docker stop ncat-test +ncat-test +``` + +Since the verification is successful, it is time to tag and push the image on +the remote registry as `ghcr.io/kiratech/ncat-http-msg-port:1.0`: + +```console +$ docker tag ncat-http-msg-port:1.0 ghcr.io/kiratech/ncat-http-msg-port:1.0 +(no output) + +$ docker push ghcr.io/kiratech/ncat-http-msg-port:1.0 +The push refers to repository [ghcr.io/kiratech/ncat-http-msg-port] +9f430253f8ea: Pushed +6cd0376aea2a: Pushed +b4cb8796a924: Pushed +1.0: digest: sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07 size: 942 +``` + +**NOTE**: it might be needed to "Change visibility" under [GitHub Package Settings](https://github.com/orgs/kiratech/packages/container/ncat-http-msg-port/settings) +from `Private` to `Public`, so that the published container will be pulled +without the need of authenticating. 
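+
+To double-check that the published image can now be pulled anonymously, you can
+log out from the registry and pull it again (an optional quick check, with the
+pull output abbreviated). Remember to run `docker login ghcr.io` again
+afterwards, since pushing the image signature in the next section requires
+authentication:
+
+```console
+$ docker logout ghcr.io
+Removing login credentials for ghcr.io
+
+$ docker pull ghcr.io/kiratech/ncat-http-msg-port:1.0
+1.0: Pulling from kiratech/ncat-http-msg-port
+...
+Status: Image is up to date for ghcr.io/kiratech/ncat-http-msg-port:1.0
+```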
+
+## Sign the pushed container image
+
+To sign the container, the digest of the pushed image is needed first:
+
+```console
+$ docker buildx imagetools inspect ghcr.io/kiratech/ncat-http-msg-port:1.0
+Name:      ghcr.io/kiratech/ncat-http-msg-port:1.0
+MediaType: application/vnd.docker.distribution.manifest.v2+json
+Digest:    sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07
+```
+
+This will be used as a reference for the signature:
+
+```console
+$ cosign sign \
+    --yes=true \
+    --key cosign.key \
+    ghcr.io/kiratech/ncat-http-msg-port@sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07
+Enter password for private key:
+
+	The sigstore service, hosted by sigstore a Series of LF Projects, LLC, is provided pursuant to the Hosted Project Tools Terms of Use, available at https://lfprojects.org/policies/hosted-project-tools-terms-of-use/.
+	Note that if your submission includes personal data associated with this signed artifact, it will be part of an immutable record.
+	This may include the email address associated with the account with which you authenticate your contractual Agreement.
+	This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later, and is subject to the Immutable Record notice at https://lfprojects.org/policies/hosted-project-tools-immutable-records/.
+
+By typing 'y', you attest that (1) you are not submitting the personal data of any other person; and (2) you understand and agree to the statement and the Agreement terms at the URLs listed above.
+tlog entry created with index: 637508105
+Pushing signature to: ghcr.io/kiratech/ncat-http-msg-port
+
+Enter password for private key:
+```
+
+**NOTE**: recent Cosign versions (>= 3.0.0) produce signatures that are not
+recognized by Kyverno, because of the new bundle format they use. To generate
+signatures supported by Kyverno, the two options
+`--use-signing-config=false --new-bundle-format=false` need to be passed on the
+command line.
+
+[This bug](https://github.com/sigstore/cosign/issues/4488#issuecomment-3432196825)
+on Cosign's GitHub repository tracks the issue.
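+
+For reference, with Cosign >= 3.0.0 the same signing operation would look like
+the following sketch (not used in this lab, which relies on v2.6.1, where the
+extra options are not needed):
+
+```console
+$ cosign sign \
+    --yes=true \
+    --key cosign.key \
+    --use-signing-config=false \
+    --new-bundle-format=false \
+    ghcr.io/kiratech/ncat-http-msg-port@sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07
+```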
+
+Once the container image is signed, the signature can be verified using
+`cosign verify`; note that the result is the same whether you reference the
+`1.0` tag or the full container image digest:
+
+```console
+$ cosign verify --key cosign.pub ghcr.io/kiratech/ncat-http-msg-port:1.0
+
+Verification for ghcr.io/kiratech/ncat-http-msg-port:1.0 --
+The following checks were performed on each of these signatures:
+  - The cosign claims were validated
+  - Existence of the claims in the transparency log was verified offline
+  - The signatures were verified against the specified public key
+
+[{"critical":{"identity":{"docker-reference":"ghcr.io/kiratech/ncat-http-msg-port:1.0"},"image":{"docker-manifest-digest":"sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07"},"type":"https://sigstore.dev/cosign/sign/v1"},"optional":null}]
+
+$ cosign verify --key cosign.pub ghcr.io/kiratech/ncat-http-msg-port@sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07
+
+Verification for ghcr.io/kiratech/ncat-http-msg-port@sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07 --
+The following checks were performed on each of these signatures:
+  - The cosign claims were validated
+  - Existence of the claims in the transparency log was verified offline
+  - The signatures were verified against the specified public key
+
+[{"critical":{"identity":{"docker-reference":"ghcr.io/kiratech/ncat-http-msg-port@sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07"},"image":{"docker-manifest-digest":"sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07"},"type":"https://sigstore.dev/cosign/sign/v1"},"optional":null}]
+```
+
+## Create the Kyverno ClusterPolicy
+
+To implement a protection mechanism that prevents unsigned container images
+from running inside the cluster, a Cluster Policy can be defined in a file
+named `verify-signed-images.yaml`:
+
+```yaml
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+  name: require-signed-images
+spec:
+  webhookConfiguration:
+    failurePolicy: Fail
+    timeoutSeconds: 30
+  background: false
+  rules:
+    - name: check-image-signature
+      match:
+        any:
+          - resources:
+              kinds:
+                - Pod
+              namespaces:
+                - default
+      verifyImages:
+        - imageReferences:
+            - "*"
+          failureAction: Enforce
+          attestors:
+            - entries:
+                - keys:
+                    publicKeys: |-
+                      -----BEGIN PUBLIC KEY-----
+                      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEttnyHZwdv2FXGGYBD7StTZ68VlmT
+                      cmcV1SV2s8NRa8HOBzxDB2+VKKN/c74W3rK2V80pAUNGKBHjKJ4iC++Yeg==
+                      -----END PUBLIC KEY-----
+```
+
+With `failurePolicy: Fail` set in `webhookConfiguration`, Pods whose images do
+not carry a signature matching the generated public key (see the `verifyImages`
+section) will be rejected at admission time.
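+
+The policy can then be created like any other resource (assuming it was saved
+as `verify-signed-images.yaml`, the file name used above):
+
+```console
+$ kubectl create -f verify-signed-images.yaml
+clusterpolicy.kyverno.io/require-signed-images created
+```
+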
+ +## Test everything + +Testing the behavior of the cluster policy is a matter of just trying to run +Pods with signed and non signed containers: + +```console +$ kubectl run goodpod --image=ghcr.io/kiratech/ncat-http-msg-port:1.0 +pod/goodpod created + +$ kubectl get pod goodpod -o jsonpath='{.metadata.annotations.kyverno\.io\/verify-images}'; echo +{"ghcr.io/kiratech/ncat-http-msg-port@sha256:3a803fd6de72c9a9188ecdc60d508dc820cfd10c590fb37c06697017cc5fcd07":"pass"} + +$ kubectl run notgoodpod --image=nginx +Error from server: admission webhook "mutate.kyverno.svc-fail" denied the request: + +resource Pod/default/notgoodpod was blocked due to the following policies + +require-signed-images: + check-image-signature: 'failed to verify image docker.io/nginx:latest: .attestors[0].entries[0].keys: + no signatures found' +``` diff --git a/Workshops/Kubernetes-Security/Stage-4-Policy-Reporter-Visualization.md b/Workshops/Kubernetes-Security/Stage-4-Policy-Reporter-Visualization.md new file mode 100644 index 0000000..dfc11fb --- /dev/null +++ b/Workshops/Kubernetes-Security/Stage-4-Policy-Reporter-Visualization.md @@ -0,0 +1,293 @@ +# Policy Reporter - Visualizing Kyverno Policies + +Policy Reporter is a monitoring and observability tool for Kubernetes that +visualizes PolicyReport CRDs generated by Kyverno and other policy engines. + +It provides a web UI, Prometheus metrics integration, and notification support +for Slack, Discord, MS Teams, and more, helping teams understand policy +compliance across their clusters. + +Check [the official Policy Reporter documentation](https://kyverno.github.io/policy-reporter-docs/) +and [Kyverno documentation](https://kyverno.io/docs/) for more details. + +## Requisites + +Policy Reporter requires Kyverno to be installed and running in your cluster. +If you haven't completed Stage 2, install Kyverno first + +**Important**: For Kubernetes 1.29 or older, use Kyverno version 3.1.4 or +earlier to avoid compatibility issues with ValidatingAdmissionPolicy APIs. + +## Installing Policy Reporter + +Add the Policy Reporter Helm repository and install it with UI and Kyverno +plugin enabled: + +```console +$ helm repo add policy-reporter https://kyverno.github.io/policy-reporter +"policy-reporter" has been added to your repositories + +$ helm repo update +Hang tight while we grab the latest from your chart repositories... +...Successfully got an update from the "policy-reporter" chart repository +Update Complete. ⎈Happy Helming!⎈ + +$ helm install policy-reporter policy-reporter/policy-reporter \ + --create-namespace -n policy-reporter \ + --set ui.enabled=true \ + --set kyvernoPlugin.enabled=true \ + --set ui.plugins.kyverno=true +NAME: policy-reporter +LAST DEPLOYED: Thu Dec 19 17:05:00 2025 +NAMESPACE: policy-reporter +STATUS: deployed +REVISION: 1 +... + +$ kubectl get pods -n policy-reporter +NAME READY STATUS RESTARTS AGE +policy-reporter-7d6b5c8f9d-4xzmh 1/1 Running 0 45s +policy-reporter-ui-5f7b9d6c8d-2pqrt 1/1 Running 0 45s +``` + +## Testing Policy Reports + +You can use the provided script to automatically create test policies and pods: + +```console +$ chmod +x stage-4-policy-reporter-visualization.sh +(no output) + +$ ./stage-4-policy-reporter-visualization.sh + +═══════════════════════════════════════════════════════════ + Stage 4: Policy Reporter Visualization - Testing +═══════════════════════════════════════════════════════════ + +ℹ Checking prerequisites... 
+✓ Prerequisites check passed + +═══════════════════════════════════════════════════════════ + Creating Test Environment +═══════════════════════════════════════════════════════════ + +ℹ Creating namespace policy-test... +namespace/policy-test created +✓ Namespace created +... +``` + +The script creates two ClusterPolicies and three test pods to demonstrate policy +violations and compliant resources. + +## Viewing Policy Reports + +After the test script runs, Kyverno generates PolicyReports for the resources: + +```console +$ kubectl get policyreport -n policy-test +NAME KIND NAME PASS FAIL WARN ERROR SKIP AGE +5d653b12-8fe9-4e17-8464-551f35fb76d5 Pod non-compliant-missing-labels 2 1 0 0 0 7s +843dab32-08bc-41de-9af3-1234a84a365e Pod non-compliant-latest-tag 2 1 0 0 0 7s +d7908844-98a5-446a-a170-8faacfdb2741 Pod compliant-pod 3 0 0 0 0 8s + +kubectl -n policy-test describe policyreport 5d653b12-8fe9-4e17-8464-551f35fb76d5 +Name: 5d653b12-8fe9-4e17-8464-551f35fb76d5 +Namespace: policy-test +Labels: app.kubernetes.io/managed-by=kyverno +Annotations: +API Version: wgpolicyk8s.io/v1alpha2 +Kind: PolicyReport +Metadata: + Creation Timestamp: 2025-12-19T16:46:07Z + Generation: 2 + Owner References: + API Version: v1 + Kind: Pod + Name: non-compliant-missing-labels + UID: 5d653b12-8fe9-4e17-8464-551f35fb76d5 + Resource Version: 20117 + UID: 5b938b66-b135-4427-860c-35f8711f7f30 +Results: + Category: Best Practices + Message: validation rule 'require-image-tag' passed. + Policy: disallow-latest-tag + Result: pass + Rule: require-image-tag + Scored: true + Severity: medium + Source: kyverno + Timestamp: + Nanos: 0 + Seconds: 1766162787 + Category: Best Practices + Message: validation rule 'require-image-tag-initcontainers' passed. + Policy: disallow-latest-tag + Result: pass + Rule: require-image-tag-initcontainers + Scored: true + Severity: medium + Source: kyverno + Timestamp: + Nanos: 0 + Seconds: 1766162787 + Category: Best Practices + Message: validation error: Pods must have 'app' and 'owner' labels. rule check-for-labels failed at path /metadata/labels/owner/ + Policy: require-labels + Result: fail + Rule: check-for-labels + Scored: true + Severity: medium + Source: kyverno + Timestamp: + Nanos: 0 + Seconds: 1766162787 +Scope: + API Version: v1 + Kind: Pod + Name: non-compliant-missing-labels + Namespace: policy-test + UID: 5d653b12-8fe9-4e17-8464-551f35fb76d5 +Summary: + Error: 0 + Fail: 1 + Pass: 2 + Skip: 0 + Warn: 0 +Events: +... +``` + +## Accessing the Policy Reporter UI + +Start a port forward to access the web UI: + +```console +$ kubectl port-forward -n policy-reporter svc/policy-reporter-ui 8082:8080 +Forwarding from 127.0.0.1:8082 -> 8080 +Forwarding from [::1]:8082 -> 8080 +``` + +Open your browser and navigate to `http://localhost:8082`. + +The UI provides: + +- **Dashboard**: Overview of all policy results with pass/fail/warn/error counts +- **Policy Dashboard**: Detailed view filtered by namespace, policy, and severity +- **Kyverno**: Enhanced policy details with annotations and exceptions + +## Cleanup + +### Using the Script + +The test script includes an interactive cleanup option at the end. 
If you didn't +run cleanup during the script execution, you can manually clean up: + +```console +$ kubectl delete pod -n policy-test --all +pod "compliant-pod" deleted +pod "non-compliant-latest-tag" deleted +pod "non-compliant-missing-labels" deleted + +$ kubectl delete clusterpolicy require-labels disallow-latest-tag +clusterpolicy.kyverno.io "require-labels" deleted +clusterpolicy.kyverno.io "disallow-latest-tag" deleted + +$ kubectl delete namespace policy-test +namespace "policy-test" deleted +``` + +### Manual Cleanup + +To remove Policy Reporter: + +```console +$ helm uninstall policy-reporter -n policy-reporter +release "policy-reporter" uninstalled + +$ kubectl delete namespace policy-reporter +namespace "policy-reporter" deleted +``` + +## Troubleshooting + +### Kyverno admission controller not ready + +If Kyverno's admission controller pod shows `0/1 READY`, it may be having TLS +certificate issues. This is common on Kind clusters: + +```console +$ kubectl get pods -n kyverno +NAME READY STATUS RESTARTS AGE +kyverno-admission-controller-6bd675cdb7-7m8c2 0/1 Running 1 3m +kyverno-background-controller-5c7f9889d-7dfkn 1/1 Running 0 3m +kyverno-cleanup-controller-59c4b88dbc-6lpp2 1/1 Running 0 3m +kyverno-reports-controller-856b76d78d-2rq5f 1/1 Running 0 3m + +$ kubectl logs -n kyverno kyverno-admission-controller-6bd675cdb7-7m8c2 | grep TLS +http: TLS handshake error: secret "kyverno-svc.kyverno.svc.kyverno-tls-pair" not found +``` + +Solution: Reinstall Kyverno with more tolerant health probes: + +```console +$ helm uninstall kyverno -n kyverno +release "kyverno" uninstalled + +$ helm install kyverno kyverno/kyverno -n kyverno \ + --create-namespace --version 3.1.4 \ + --set admissionController.startupProbe.failureThreshold=30 \ + --set admissionController.startupProbe.periodSeconds=10 +... +``` + +### No policy reports showing + +If `kubectl get policyreport` returns no results, verify that: + +Kyverno reports controller is running: + +```console +$ kubectl get pods -n kyverno -l app.kubernetes.io/component=reports-controller +NAME READY STATUS RESTARTS AGE +kyverno-reports-controller-856b76d78d-2rq5f 1/1 Running 0 5m +``` + +PolicyReport CRDs are installed: + +```console +$ kubectl get crd | grep policyreport +clusterpolicyreports.wgpolicyk8s.io 2025-12-19T15:25:04Z +policyreports.wgpolicyk8s.io 2025-12-19T15:25:04Z +``` + +Policies are in Audit mode (not Enforce) and have `background: true`: + +```console +$ kubectl get clusterpolicy require-labels -o yaml | grep -A 2 "spec:" +spec: + background: true + validationFailureAction: Audit +``` + +If reports still don't appear, recreate the resources to trigger evaluation: + +```console +$ kubectl delete pod -n policy-test --all +pod "compliant-pod" deleted + +$ kubectl apply -f your-test-pods.yaml +pod/compliant-pod created +``` + +### Compatibility errors with ValidatingAdmissionPolicy + +If you see errors like `failed to list *v1.ValidatingAdmissionPolicy` in +Kyverno logs, your Kyverno version is too new for your Kubernetes cluster: + +```console +$ kubectl logs -n kyverno -l app.kubernetes.io/component=reports-controller | grep ValidatingAdmission +Failed to watch error="failed to list *v1.ValidatingAdmissionPolicyBinding: the server could not find the requested resource" +``` + +Solution: Downgrade to a compatible version (see Requisites section above). 
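+
+To verify which versions you are actually running before deciding whether a
+downgrade is needed, check both the cluster and the Kyverno release (the values
+below reflect the reference environment of the earlier stages and are partly
+abbreviated):
+
+```console
+$ kubectl version
+Client Version: v1.34.0
+...
+Server Version: v1.34.0
+
+$ helm list -n kyverno
+NAME     NAMESPACE  REVISION  ...  STATUS    CHART          APP VERSION
+kyverno  kyverno    1         ...  deployed  kyverno-3.5.2  v1.15.2
+```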
diff --git a/Workshops/Kubernetes-Security/stage-1-network-policies-namespaces.sh b/Workshops/Kubernetes-Security/stage-1-network-policies-namespaces.sh new file mode 100755 index 0000000..0b196d6 --- /dev/null +++ b/Workshops/Kubernetes-Security/stage-1-network-policies-namespaces.sh @@ -0,0 +1,58 @@ +#!/bin/bash + +clean() { + kubectl delete namespace backend frontend external +} + +if [ "$1" == "clean" ]; then + echo "Cleaning up things..." + clean + exit $? +fi + +kubectl create namespace backend +kubectl create namespace frontend +kubectl create namespace external + +kubectl --namespace backend create deployment backend --image nginx:latest +kubectl wait --namespace backend --for=condition=ready pod --selector=app=backend --timeout=90s + +kubectl -n frontend run frontend --image=curlimages/curl:latest --restart=Never -- /bin/sh -c "while true; do sleep 3600; done" +kubectl -n external run external --image=curlimages/curl:latest --restart=Never -- /bin/sh -c "while true; do sleep 3600; done" + +BACKENDIP=$(kubectl -n backend get pod -l app=backend -o jsonpath="{.items[0].status.podIP}") +FRONTENDPOD=$(kubectl -n frontend get pod -l run=frontend -o jsonpath='{.items[0].metadata.name}') +EXTERNALPOD=$(kubectl -n external get pod -l run=external -o jsonpath='{.items[0].metadata.name}') + +sleep 3 + +echo -n "Before NetworkPolicy (frontend): " +kubectl -n frontend exec -it $FRONTENDPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE +echo -n "Before NetworkPolicy (external): " +kubectl -n external exec -it $EXTERNALPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE + +kubectl label namespace frontend name=frontend +kubectl label namespace backend name=backend +kubectl label namespace external name=external + +kubectl create -f - < /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE +echo -n "After NetworkPolicy (external): " +kubectl -n external exec -it $EXTERNALPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE diff --git a/Workshops/Kubernetes-Security/stage-2-use-default-network-policy-on-namespaces.sh b/Workshops/Kubernetes-Security/stage-2-use-default-network-policy-on-namespaces.sh new file mode 100755 index 0000000..ef38037 --- /dev/null +++ b/Workshops/Kubernetes-Security/stage-2-use-default-network-policy-on-namespaces.sh @@ -0,0 +1,141 @@ +#!/bin/bash + +clean() { + kubectl delete --wait clusterpolicies add-namespace-name-label add-default-deny + kubectl delete --wait namespace backend frontend external +} + +if [ "$1" == "clean" ]; then + echo "Cleaning up things..." + clean + exit $? 
+fi + +kubectl create -f - < /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE +echo -n "Before NetworkPolicy (external): " +kubectl -n external exec -it $EXTERNALPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE + +kubectl create -f - < /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE +echo -n "After NetworkPolicy (external): " +kubectl -n external exec -it $EXTERNALPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE diff --git a/Workshops/Kubernetes-Security/stage-4-policy-reporter-visualization.sh b/Workshops/Kubernetes-Security/stage-4-policy-reporter-visualization.sh new file mode 100755 index 0000000..a37dec0 --- /dev/null +++ b/Workshops/Kubernetes-Security/stage-4-policy-reporter-visualization.sh @@ -0,0 +1,289 @@ +#!/bin/bash + +set -e + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' # No Color + +# Function to print colored output +print_info() { + echo -e "${BLUE}ℹ${NC} $1" +} + +print_success() { + echo -e "${GREEN}✓${NC} $1" +} + +print_warning() { + echo -e "${YELLOW}⚠${NC} $1" +} + +print_error() { + echo -e "${RED}✗${NC} $1" +} + +print_header() { + echo "" + echo -e "${GREEN}═══════════════════════════════════════════════════════════${NC}" + echo -e "${GREEN} $1${NC}" + echo -e "${GREEN}═══════════════════════════════════════════════════════════${NC}" + echo "" +} + +# Function to wait for resources +wait_for_resource() { + local resource=$1 + local name=$2 + local namespace=$3 + local timeout=${4:-30} + + print_info "Waiting for $resource $name in namespace $namespace..." + local counter=0 + while [ $counter -lt $timeout ]; do + if kubectl get $resource $name -n $namespace &> /dev/null; then + print_success "$resource $name is ready" + return 0 + fi + sleep 2 + counter=$((counter + 2)) + done + print_warning "Timeout waiting for $resource $name" + return 1 +} + +print_header "Stage 4: Policy Reporter Visualization - Testing" + +# Check prerequisites +print_info "Checking prerequisites..." + +if ! kubectl get namespace kyverno &> /dev/null; then + print_error "Kyverno is not installed. Please install it first." + echo "Run: helm install kyverno kyverno/kyverno -n kyverno --create-namespace --version 3.1.4" + exit 1 +fi + +if ! kubectl get namespace policy-reporter &> /dev/null; then + print_error "Policy Reporter is not installed. Please install it first." + echo "See Stage-4-Policy-Reporter-Visualization.md for installation instructions." + exit 1 +fi + +print_success "Prerequisites check passed" + +# Create test namespace +print_header "Creating Test Environment" + +print_info "Creating namespace policy-test..." +kubectl create namespace policy-test --dry-run=client -o yaml | kubectl apply -f - +print_success "Namespace created" + +# Create Kyverno policies +print_header "Creating Kyverno Policies" + +print_info "Creating 'require-labels' policy..." +cat <- + Requires all pods to have 'app' and 'owner' labels. +spec: + validationFailureAction: Audit + background: true + rules: + - name: check-for-labels + match: + any: + - resources: + kinds: + - Pod + validate: + message: "Pods must have 'app' and 'owner' labels." + pattern: + metadata: + labels: + app: "?*" + owner: "?*" +EOF +print_success "Policy 'require-labels' created" + +print_info "Creating 'disallow-latest-tag' policy..." +cat <- + Disallow use of the 'latest' tag in container images. 
+spec: + validationFailureAction: Audit + background: true + rules: + - name: require-image-tag + match: + any: + - resources: + kinds: + - Pod + validate: + message: "Using 'latest' tag is not allowed. Specify a version tag." + pattern: + spec: + containers: + - image: "!*:latest" + - name: require-image-tag-initcontainers + match: + any: + - resources: + kinds: + - Pod + validate: + message: "Using 'latest' tag is not allowed in init containers." + pattern: + spec: + =(initContainers): + - image: "!*:latest" +EOF +print_success "Policy 'disallow-latest-tag' created" + +# Deploy test pods +print_header "Deploying Test Pods" + +print_info "Creating compliant pod (has labels, versioned image)..." +cat < /dev/null; then + echo "" + kubectl get policyreport -n policy-test + echo "" + print_info "Detailed policy report:" + echo "" + kubectl describe policyreport -n policy-test | grep -A 50 "Results:" || kubectl describe policyreport -n policy-test +else + print_warning "No policy reports found yet" + print_info "Policy reports might take a moment to appear" +fi + +# Show how to access UI +print_header "Accessing Policy Reporter UI" + +echo "" +print_info "To view the reports in the Policy Reporter UI:" +echo "" +echo " 1. Start port forwarding:" +echo " ${GREEN}kubectl port-forward -n policy-reporter svc/policy-reporter-ui 8082:8080${NC}" +echo "" +echo " 2. Open your browser:" +echo " ${GREEN}http://localhost:8082${NC}" +echo "" +echo " 3. Navigate to the 'policy-test' namespace to see violations" +echo "" + +# Cleanup function +cleanup() { + print_header "Cleanup" + + print_info "Deleting test pods..." + kubectl delete pod -n policy-test --all --ignore-not-found=true + print_success "Test pods deleted" + + print_info "Deleting cluster policies..." + kubectl delete clusterpolicy require-labels disallow-latest-tag --ignore-not-found=true + print_success "Cluster policies deleted" + + print_info "Deleting test namespace..." + kubectl delete namespace policy-test --ignore-not-found=true + print_success "Test namespace deleted" + + echo "" + print_success "Cleanup completed!" +} + +# Ask for cleanup +echo "" +read -p "$(echo -e ${YELLOW}Do you want to cleanup test resources? [y/N]:${NC} )" -n 1 -r +echo "" + +if [[ $REPLY =~ ^[Yy]$ ]]; then + cleanup +else + print_info "Cleanup skipped" + echo "" + print_warning "To cleanup later, run the following commands:" + echo " kubectl delete namespace policy-test" + echo " kubectl delete clusterpolicy require-labels disallow-latest-tag" +fi + +print_header "Stage 4 Testing Complete" +echo "" +print_success "All tests completed successfully!" +echo ""