3 changes: 2 additions & 1 deletion Mastering-Bash/Bash-Exit-Status.md
@@ -79,5 +79,6 @@ In this lab you will:
exit 0
fi

-exit 1
+# Exit status 3 will be returned if no previous condition was met.
+exit 3
```
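
For reference, the exit status of the most recent command is available in `$?`. A quick check, assuming the lab script is saved as `exit_status.sh` (a hypothetical name) and none of the earlier conditions match:

```console
$ ./exit_status.sh
$ echo $?
3
```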
4 changes: 2 additions & 2 deletions Mastering-Bash/Bash-Expansions.md
@@ -34,11 +34,11 @@ In this lab you will:
leave the default value for the second run:

```console
-$ ./default_answer pizza
+$ ./default_answer.sh pizza
The answer to life, the universe and everything is pizza
Name of the script is ./default_answer

-$ ./default_answer
+$ ./default_answer.sh
The answer to life, the universe and everything is 42
Name of the script is ./default_answer
```
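
The script itself is not part of this diff; a plausible sketch of `default_answer.sh`, reconstructed from the output above around the `${1:-42}` default-value expansion, could be:

```bash
#!/bin/bash
# Use the first positional argument if given, otherwise fall back to 42.
answer="${1:-42}"
echo "The answer to life, the universe and everything is ${answer}"
echo "Name of the script is $0"
```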
10 changes: 6 additions & 4 deletions Mastering-Bash/Bash-Outputs.md
@@ -11,7 +11,7 @@ In this lab you will:
5. Append to the previously created file all the lines that contain a `:9` from
`/etc/group` file.
6. Sort the content of `results.txt` by name.
-7. Use the `less` pager to view the content of `/var/log/boot.log` and invoke an
+7. Use the `less` pager to view the content of `/etc/vimrc` and invoke an
editor to edit the file.

## Solution
@@ -85,12 +85,14 @@ In this lab you will:
unbound:x:994:
```

-7. Use the less pager to view the content of `/var/log/boot.log` and invoke an
+7. Use the less pager to view the content of `/etc/vimrc` and invoke an
editor to edit the file:

```console
-$ less /var/log/boot.log
+$ less /etc/vimrc
(less interface opens)
```

-and then press `v`.
+and then press `v`. You will see the `less` interface turning into a `vim`
+one, but without the opportunity to make any change, since the file is not
+owned by the `kirater` unprivileged user.
2 changes: 1 addition & 1 deletion Mastering-Bash/Bash-Signals-Kill.md
@@ -18,7 +18,7 @@ In this lab you will:
1. Launch the sleep command and then press `CTRL+Z`. The result should be:

```console
-[kirater@machine ~] $ sleep 100
+$ sleep 100
^Z
[1]+ Stopped sleep 100
```
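
A stopped job can then be listed and resumed with the standard job-control builtins; illustrative output:

```console
$ jobs
[1]+  Stopped                 sleep 100

$ fg %1
sleep 100
```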
39 changes: 39 additions & 0 deletions Workshops/Kubernetes-Security/README.md
@@ -0,0 +1,39 @@
# Kubernetes Security Workshop

## Environment architecture

The overall architecture of this workshop project is based upon a single
Minikube cluster installation.

Everything is meant to be created on a physical machine or a virtual one with,
*at least*, 2 CPUs and 4 Gigabytes of RAM; 4 CPUs and 8 Gigabytes of RAM are
ideal.

Software requirements for the main machine are essentially just the Docker
service; everything else is covered in the various stages.
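
A quick sanity check that Docker is installed and running on the main machine (version and build will vary):

```console
$ docker --version
Docker version 28.4.0, build ...

$ systemctl is-active docker
active
```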

The outputs reported in the various stages were taken from an [AlmaLinux 9](https://repo.almalinux.org/almalinux/9/cloud/x86_64/images/AlmaLinux-9-GenericCloud-latest.x86_64.qcow2)
virtual machine with 4 CPUs and 8 Gigabytes of RAM.

## Workshop structure

The workshop is organized in stages:

- Stage 0: [Install Minikube](../../Common/Kubernetes-Install-Minikube.md).
- Stage 1: [Network Policies](Stage-1-Network-Policies.md).
- Stage 2: [Kyverno, Policy as Code](Stage-2-Kyverno-Policy-as-Code.md).
- Stage 3: [Cosign, Sign Container Images](Stage-3-Sign-Containers-with-Cosign.md).
- Stage 4: [Policy Reporter UI](Stage-4-Policy-Reporter-Visualization.md).

## References

Several technologies are covered in this workshop; the main ones are listed
here:

- [Kubernetes](https://kubernetes.io/), the container orchestration platform.
- [Kyverno](https://kyverno.io/), declarative Policy as Code for Kubernetes.
- [Cosign](https://github.com/sigstore/cosign), a tool for signing OCI container images.

## Author

Raoul Scarazzini ([raoul.scarazzini@kiratech.it](mailto:raoul.scarazzini@kiratech.it))
186 changes: 186 additions & 0 deletions Workshops/Kubernetes-Security/Stage-1-Network-Policies.md
@@ -0,0 +1,186 @@
# Kubernetes Network Policies

Network Policies are an application-centric Kubernetes construct that lets you
control traffic flow at the IP address or port level (OSI layer 3 or 4) for the
TCP, UDP, and SCTP protocols.

Check [the official Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#network-traffic-filtering)
for all the details about these components.

## Prerequisites

Network Policies must be supported by the cluster's network plugin. Minikube
offers no such support by default, so it has to be started with a different
network plugin, such as Calico:

```console
$ minikube stop && minikube delete && minikube start --cni=calico
* Stopping node "minikube" ...
* Powering off "minikube" via SSH ...
* 1 node stopped.
* Deleting "minikube" in docker ...
* Deleting container "minikube" ...
* Removing /home/kirater/.minikube/machines/minikube ...
* Removed all traces of the "minikube" cluster.
* minikube v1.37.0 on Almalinux 9.4 (kvm/amd64)
* Automatically selected the docker driver. Other choices: none, ssh
* Using Docker driver with root privileges
* Starting "minikube" primary control-plane node in "minikube" cluster
* Pulling base image v0.0.48 ...
* Creating docker container (CPUs=2, Memory=3900MB) ...
* Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
* Configuring Calico (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```
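
Before moving on, you can optionally verify that the Calico pods are up (pod names and ages will differ):

```console
$ kubectl -n kube-system get pods -l k8s-app=calico-node
NAME                READY   STATUS    RESTARTS   AGE
calico-node-7xk2m   1/1     Running   0          2m
```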

## Network Policies with namespaces

Given a situation where you have these three namespaces (if they are left over
from a previous run, remove them first with `kubectl delete namespace backend
frontend external`):

```console
$ kubectl create namespace backend
namespace "backend" created

$ kubectl create namespace frontend
namespace "frontend" created

$ kubectl create namespace external
namespace "external" created
```

And inside the `backend` namespace you have a deployment with a listening
service, in this case `nginx`:

```console
$ kubectl --namespace backend create deployment backend --image nginx:latest
deployment.apps/backend created

$ kubectl wait --namespace backend --for=condition=ready pod --selector=app=backend --timeout=90s
pod/backend-86565945bf-xqxdr condition met
```

By creating two other applications in the other two namespaces, you can
simulate connections coming from different places:

```console
$ kubectl -n frontend run frontend --image=curlimages/curl:latest --restart=Never -- /bin/sh -c "while true; do sleep 3600; done"
pod/frontend created

$ kubectl -n external run external --image=curlimages/curl:latest --restart=Never -- /bin/sh -c "while true; do sleep 3600; done"
pod/external created
```

Once you get the backend Pod IP address and the names of the two client Pods:

```console
$ BACKENDIP=$(kubectl -n backend get pod -l app=backend -o jsonpath="{.items[0].status.podIP}")
(no output)

$ FRONTENDPOD=$(kubectl -n frontend get pod -l run=frontend -o jsonpath='{.items[0].metadata.name}')
(no output)

$ EXTERNALPOD=$(kubectl -n external get pod -l run=external -o jsonpath='{.items[0].metadata.name}')
(no output)
```
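
A quick way to confirm the three variables are populated (the IP will differ in your cluster; the pod names match their namespaces since they were created with `kubectl run`):

```console
$ echo "$BACKENDIP $FRONTENDPOD $EXTERNALPOD"
10.244.120.68 frontend external
```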

You can first check the behavior without any Network Policy in place;
connections from both clients should work:

```console
$ kubectl -n frontend exec -it $FRONTENDPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE
REACHABLE

$ kubectl -n external exec -it $EXTERNALPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE
REACHABLE
```

Then, to test Network Policies, each namespace should get a label, as follows:

```console
$ kubectl label namespace backend name=backend
namespace/backend labeled

$ kubectl label namespace frontend name=frontend
namespace/frontend labeled

$ kubectl label namespace external name=external
namespace/external labeled
```
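
The labels can be verified with the `-L` (label columns) option (ages will vary):

```console
$ kubectl get namespaces backend frontend external -L name
NAME       STATUS   AGE   NAME
backend    Active   5m    backend
frontend   Active   5m    frontend
external   Active   5m    external
```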

It is then possible to create a Network Policy that allows only the
connections coming from the namespace labeled `frontend`:

```console
$ kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-except-frontend
  namespace: backend
spec:
  podSelector: {} # applies to all pods in the backend namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend
EOF
```
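
The policy can then be listed in the `backend` namespace; the empty `podSelector` shows up as `<none>` (age will vary):

```console
$ kubectl -n backend get networkpolicy
NAME                       POD-SELECTOR   AGE
deny-all-except-frontend   <none>         30s
```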

A new test should confirm that now only `frontend` Pods are able to access
`$BACKENDIP`:

```console
$ kubectl -n frontend exec -it $FRONTENDPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE
REACHABLE

$ kubectl -n external exec -it $EXTERNALPOD -- curl -s --connect-timeout 5 $BACKENDIP > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE
UNREACHABLE
```

## Testing everything

The script `stage-1-network-policies-namespaces.sh` makes it possible to
test, check, and clean up the described configuration, as follows:

```console
$ ./stage-1-network-policies-namespaces.sh
namespace/backend created
namespace/frontend created
namespace/external created
deployment.apps/backend created
pod/backend-86565945bf-lmzgn condition met
pod/frontend created
pod/external created
Before NetworkPolicy (frontend): REACHABLE
Before NetworkPolicy (external): REACHABLE
namespace/frontend labeled
namespace/backend labeled
namespace/external labeled
networkpolicy.networking.k8s.io/deny-all-except-frontend created
After NetworkPolicy (frontend): REACHABLE
After NetworkPolicy (external): UNREACHABLE
```

The output demonstrates that **before** applying the Network Policy all the
communications between `frontend`, `external`, and `backend` are allowed, while
right after, only `frontend` is able to contact `backend`.

Note that to make this work **namespaces must be labeled**.
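
The script content is not reproduced in this page; a minimal sketch of how it could be structured, based on the commands and output shown above (the `clean` argument handling and the `check` helper are assumptions), looks like this:

```bash
#!/bin/bash
# Sketch of stage-1-network-policies-namespaces.sh (assumed structure):
# create the lab resources, test connectivity before and after the
# Network Policy, or clean everything up when called with "clean".
set -e

if [ "$1" = "clean" ]; then
  echo "Cleaning up things..."
  kubectl delete namespace backend frontend external
  exit 0
fi

kubectl create namespace backend
kubectl create namespace frontend
kubectl create namespace external

kubectl --namespace backend create deployment backend --image nginx:latest
kubectl wait --namespace backend --for=condition=ready pod --selector=app=backend --timeout=90s

kubectl -n frontend run frontend --image=curlimages/curl:latest --restart=Never -- /bin/sh -c "while true; do sleep 3600; done"
kubectl -n external run external --image=curlimages/curl:latest --restart=Never -- /bin/sh -c "while true; do sleep 3600; done"
kubectl wait -n frontend --for=condition=ready pod frontend --timeout=90s
kubectl wait -n external --for=condition=ready pod external --timeout=90s

BACKENDIP=$(kubectl -n backend get pod -l app=backend -o jsonpath="{.items[0].status.podIP}")

# Curl the backend from a client pod; pod names match their namespaces here.
check() {
  kubectl -n "$1" exec "$1" -- curl -s --connect-timeout 5 "$BACKENDIP" \
    > /dev/null 2>&1 && echo REACHABLE || echo UNREACHABLE
}

echo "Before NetworkPolicy (frontend): $(check frontend)"
echo "Before NetworkPolicy (external): $(check external)"

kubectl label namespace frontend name=frontend
kubectl label namespace backend name=backend
kubectl label namespace external name=external

kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-except-frontend
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend
EOF

echo "After NetworkPolicy (frontend): $(check frontend)"
echo "After NetworkPolicy (external): $(check external)"
```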

All the resources can be queried along the way, and when you're done,
everything can be cleaned up with:

```console
$ ./stage-1-network-policies-namespaces.sh clean
Cleaning up things...
namespace "backend" deleted
namespace "frontend" deleted
namespace "external" deleted
```