Current build status:
Table of Contents
- Introduction
- Course Information
- Information about the Project
- What I have done as Part of the Project
- Task 1 - Creating and executing Sprint Plans
- Task 2 - Developing a RESTful Service using Test Driven Development
- REST API Guidelines
- Setting up the Development Environment
- Implementing API Endpoint - Read an Account
- Implementing API Endpoint - Update an Account
- Implementing API Endpoint - Delete an Account
- Implementing API Endpoint - List all Accounts
- Improving Total Code Coverage
- Demonstration of the REST API
- Task 3 - Adding Continuous Integration and Security to the Repository
- Task 4 - Deploying the Application to Kubernetes
- Task 5 - Building an automated CD DevOps Pipeline
- Getting Started
- Contact
This capstone project / repository was created as part of IBM's DevOps and Software Engineering program.
A template was used - see IBM repository: https://github.com/ibm-developer-skills-network/aolwx-devops-capstone-template.
Many thanks to the IBM course team as well as John Rofrano (Primary Instructor) and all the other contributors!
You have been asked by the customer account manager at your company to
develop an account microservice to keep track of the customers
on your e-commerce website. Since it is a microservice, it is expected
to have a well-formed REST API that other microservices can call.
This service initially needs to create, read, update, delete,
and list customers.
You have also been told that someone else has started on this task.
They have already developed the database model and a Python
Flask-based REST API with an endpoint to create a customer account.
Tasks that need to be completed:
- Create and execute sprint plans
- Develop a RESTful Service using Test Driven Development (TDD)
- Add Continuous Integration (CI) and Security to the Repository
- Deploy the application to Kubernetes
- Build an automated CD DevOps Pipeline
Preview images of the project:
Title: DevOps and Software Engineering
Type: Capstone Project
Course Provider: IBM
- Client: Myself
- Project Goal: Demonstrate the knowledge gained in the IBM program DevOps and Software Engineering with a capstone project.
- Number of Project Participants: 1 (cloned the IBM template repository; developed the rest on my own)
- Time Period: December, 2024
- Industry / Area: DevOps, Software Engineering
- Role: Developer
- Languages: English
- Result: Successfully built an account microservice. Demonstrated skills and acquired new knowledge.
Tools and technologies used in my role:
- GitHub (Version Control, Kanban Board, Actions, ...)
- IBM Cloud IDE (based on Theia and Container)
- Programming Language: Python
- Python Web Framework: Flask
- Python Unittest Framework: nose
- Python Linting: Flake8
- Containerization: Docker
- Container Orchestration: Kubernetes / OpenShift
- Image Registry: IBM Cloud Container Registry
- CI/CD Tool: Tekton
The RESTful microservice is created with the help of an agile plan (Scrum).
This means the first task is Sprint 0.
The main goal of Sprint 0 is to set up the team for future delivery by creating the basic project skeleton, defining the vision and preparing the product backlog.
First, a user story template was created (can be found in: .github/ISSUE_TEMPLATE):
The template provides the basis for the user stories to be created for the sprints.
It uses the Gherkin Syntax.
Gherkin is a simple description language with very few rules for the structured formulation of scenarios in the context of behavior-driven software development according to BDD principles.
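A hypothetical scenario in Gherkin syntax, purely to illustrate the style (not one of the actual user stories):

```gherkin
Scenario: Read an account from the service
  Given an account with id 1 exists in the service
  When I send a GET request to "/accounts/1"
  Then I should receive status code "200 OK"
  And the response body should contain the account details
```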
Next, the user stories were created.
The titles were provided by IBM as part of the project (e.g. Update an account in the service) and I filled them with content.
Two examples (Note: Screenshots were taken later, therefore they are already labeled and assigned to a project):
The acceptance criteria define the status of Done.
After the user stories were completed, a GitHub project (Kanban board) was created.
All issues were assigned to the New Issues column:
The issues were then moved either to the icebox or the product backlog, depending on their priority.
For example, deploying is one of the last steps, which is why it ended up in the icebox.
The priorities of the issues in the backlog have also been defined.
P0 is the highest priority and is therefore listed at the top.
P2 is the lowest priority and is therefore listed at the bottom.
The following was also assigned to the issues at the end:
- An iteration field Sprint, which has a duration of 1 week. At this point, the date is 14/12/2024. This means that the 1st sprint starts on 14/12/2024 and ends on 20/12/2024.
- The size and estimated story points. The scale (provided by IBM) is 3, 5, 8, 13 = S, M, L, XL.
The issues were moved from the product backlog to the sprint backlog and the result is as follows:
Task 1 is finished. Task 2 can be started.
In this task, the REST API is expanded to include additional endpoints.
Test Driven Development is used in this project.
This means that the tests are always written first and then the actual code that is to fulfill the tests.
The following rule applies: Code coverage must be at least 95 %.
First, an overview of the API guidelines.
Development is based on these.
The REST API guidelines were specified by IBM:
One thing is still missing from the development environment setup: configuring the nosetests command with additional options in setup.cfg.
This saves typing when running the unit tests later.
The modified setup.cfg file:
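The section added there looks roughly like this (a sketch; the exact option values are assumptions based on common nose configurations):

```ini
# Added [nosetests] section so a plain `nosetests` call runs with
# verbose, colored spec output and coverage of the service package.
[nosetests]
verbosity=2
with-spec=1
spec-color=1
with-coverage=1
cover-erase=1
cover-package=service
```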
This completes the user story for setting up the development environment and the Kanban board is updated:
At the same time, the next user story was defined: Read an account from service (see column In Progress).
Following the TDD approach, the test cases to be fulfilled were defined first:
The code was then written to fulfill the tests.
The code coverage is more than 95%.
This means that everything fits.
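For illustration, a minimal sketch of how such a read endpoint can look in service/routes.py (import paths follow the template's layout shown later in this README; treat the details as assumptions):

```python
# A minimal sketch of the "read" route: `app` is the Flask instance,
# `Account` the model, and `status` the module of HTTP status
# constants from service/common (all provided by the IBM template).
from flask import abort

from service import app
from service.common import status
from service.models import Account


@app.route("/accounts/<int:account_id>", methods=["GET"])
def get_accounts(account_id):
    """Reads a single Account by its id"""
    account = Account.find(account_id)
    if not account:
        # 404 if no account matches the given id
        abort(status.HTTP_404_NOT_FOUND,
              f"Account with id [{account_id}] could not be found.")
    return account.serialize(), status.HTTP_200_OK
```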
At the same time, the next user story was defined: Update an account from service (see column In Progress):
Following the TDD approach, the test cases to be fulfilled were defined first:
The code was then written to fulfill the tests.
The code coverage is more than 95%.
This means that everything fits.
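A corresponding sketch of the update endpoint (continuing the assumptions from the read sketch above):

```python
# Continuing the sketch above (imports as before, plus `request`).
from flask import request


@app.route("/accounts/<int:account_id>", methods=["PUT"])
def update_accounts(account_id):
    """Updates an Account with the posted JSON data"""
    account = Account.find(account_id)
    if not account:
        abort(status.HTTP_404_NOT_FOUND,
              f"Account with id [{account_id}] could not be found.")
    account.deserialize(request.get_json())  # overwrite fields from the payload
    account.update()                         # persist the change
    return account.serialize(), status.HTTP_200_OK
```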
At the same time, the next user story was defined: Delete an account from service (see column In Progress):
Following the TDD approach, the test cases to be fulfilled were defined first:
The code was then written to fulfill the tests.
The code coverage is more than 95%.
This means that everything fits.
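A sketch of the delete endpoint (same assumptions as above):

```python
# Per the REST guidelines above, DELETE is idempotent and returns
# 204 NO CONTENT whether or not the account existed.
@app.route("/accounts/<int:account_id>", methods=["DELETE"])
def delete_accounts(account_id):
    """Deletes an Account by its id"""
    account = Account.find(account_id)
    if account:
        account.delete()
    return "", status.HTTP_204_NO_CONTENT
```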
At the same time, the next user story was defined: List all accounts from service (see column In Progress):
Following the TDD approach, the test cases to be fulfilled were defined first:
The code was then written to fulfill the tests.
The code coverage is more than 95%.
This means that everything fits.
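And a sketch of the list endpoint (same assumptions as above):

```python
# Continuing the sketch above (plus `jsonify` from flask).
from flask import jsonify


@app.route("/accounts", methods=["GET"])
def list_accounts():
    """Lists all Accounts as a JSON array"""
    accounts = Account.all()
    return jsonify([account.serialize() for account in accounts]), status.HTTP_200_OK
```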
The Kanban Board has been updated:
As an additional task, the total code coverage is being improved.
The total code coverage is currently below 95 %.
There is a lot of potential for improvement in some areas (e.g. error_handlers.py file):
A new test file was created to test the error handlers: tests/test_error_handlers.py.
The test suite for error handlers was set up (setup and teardown) and the unit tests were written:
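One such test might look like this (a sketch under the same layout assumptions; the real suite lives in tests/test_error_handlers.py):

```python
# Sketch of a unit test for the 405 error handler.
from unittest import TestCase

from service import app
from service.common import status


class TestErrorHandlers(TestCase):
    """Tests for the handlers in service/common/error_handlers.py"""

    def setUp(self):
        self.client = app.test_client()

    def test_method_not_allowed(self):
        """It should return 405 METHOD NOT ALLOWED for an unsupported verb"""
        # DELETE on the collection endpoint is not supported
        resp = self.client.delete("/accounts")
        self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
```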
The goal was achieved: The total code coverage is now above 95 %.
Now comes the last step of the task: Demonstrating the REST API.
First, local access to the service is enabled.
The following command is used to refresh the database:
flask db-create
Then use the command below to start the service with the new database:
make run
The terminal output:
The application is started using the Launch Application function in the IBM Cloud IDE.
A port number is also required to start the application.
According to the terminal, the application listens on port 5000, so port 5000 is entered.
The output:
The service is now running.
The curl command is used to make REST calls to the implemented endpoints.
Demonstration of the Create an Account endpoint with the following command:
curl -i -X POST http://127.0.0.1:5000/accounts \
-H "Content-Type: application/json" \
-d '{"name":"John Doe","email":"john@doe.com","address":"123 Main St.","phone_number":"555-1212"}'
Demonstration of the List all Accounts endpoint with the following command:
curl -i -X GET http://127.0.0.1:5000/accounts
Demonstration of the Read an Account endpoint with the following command:
curl -i -X GET http://127.0.0.1:5000/accounts/1
Demonstration of the Update an Account endpoint with the following command:
curl -i -X PUT http://127.0.0.1:5000/accounts/1 \
-H "Content-Type: application/json" \
-d '{"name":"John Doe","email":"john@doe.com","address":"123 Main St.","phone_number":"555-1111"}'
The difference: the phone number now ends in 1111 instead of 1212 (555-1212 -> 555-1111).
Demonstration of the Delete an Account endpoint with the following command:
curl -i -X DELETE http://127.0.0.1:5000/accounts/1
After deletion, all accounts were displayed to show that the account had actually been deleted.
The list is empty, so the account has been deleted.
The REST API is finished, Sprint 1 is complete, and the next task can be started: Task 3 - Adding Continuous Integration and Security to the Repository.
In Task 3, a new scenario was added (defined by IBM):
Management has been looking for ways to increase developer productivity
and has noticed that developers spend a lot of time checking
that all the tests pass before approving each pull request.
Management has decided it is time to automate this task
by implementing continuous integration (CI) using GitHub Actions.
There have also been many stories in the news about security breaches
and exploits, and management is concerned about the security of your microservice.
In an effort to be proactive, they have decided that you need to
add defensive security measures to your microservice in the
form of security headers and cross-origin resource sharing (CORS) policies.
Two new user stories were created to fulfill the requirements.
This time, these were specified by IBM.
As Sprint 1 is complete, Sprint 2 is also planned. The newly created user stories are added here.
Note: And yes... the week of Sprint 1 hasn't actually passed ;-)
I'm currently on vacation from work, so I have plenty of time, and I want to finish the project before Christmas (written on 16/12/2024).
The first user story - Need the Ability to Automate Continuous Integration Checks:
The second user story - Need to Add Security Headers and CORS Policies:
The updated Kanban Board / Sprint Plan 2:
These stories are now being implemented.
A key practice in DevOps is Continuous Integration (CI), where developers continuously integrate their code into the main branch by making frequent pull requests.
To make life easier for developers, a CI pipeline is now being implemented with the help of GitHub Actions.
I assign the user story in the Kanban board to myself and move it to the In Progress column:
The implemented YAML file (.github/workflows/ci-build.yaml) for the GitHub Actions workflow:
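The workflow is along these lines (a sketch: the Postgres service container used for the tests is omitted, and the exact flake8 flags and action versions are assumptions):

```yaml
name: CI Build
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    container: python:3.9-slim   # run all steps inside a Python 3.9 container
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip wheel
          pip install -r requirements.txt
      - name: Lint with flake8
        run: |
          flake8 service --count --select=E9,F63,F7,F82 --show-source --statistics
          flake8 service --count --max-complexity=10 --max-line-length=127 --statistics
      - name: Run unit tests with nose
        run: nosetests
```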
Once the workflow has been implemented, the results are visible on Github under the Actions tab.
Here you can see that the build failed because I had not fixed my Python linting issues:
Part of the CI user story is also the addition of a badge in the README.md, which shows the build status.
This also indicates that the build has failed:
After I fixed the linting problems, the CI workflow I created also works:
This completes all the acceptance criteria of the CI user story and the Kanban board can be updated.
The CI user story was moved to the Done column and at the same time the next user story (Need to Add Security Headers and CORS Policies) was moved to the In Progress column:
Time to implement the next user story.
The next step is to increase the security of the microservice.
First, security headers are implemented with the help of Flask Talisman.
Flask Talisman forces the REST API clients to use the HTTPS protocol.
Following the TDD approach, the test cases to be fulfilled were defined first:
Regarding the options / values of the headers:
More information can be found in the Flask documentation or here: https://github.com/GoogleCloudPlatform/flask-talisman
To fulfill the tests, Flask Talisman dependency was installed and a Talisman instance was created after the Flask app instantiation.
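A minimal sketch of that change (assuming the app is created in service/__init__.py):

```python
# Sketch: create the Talisman instance right after the Flask app.
from flask import Flask
from flask_talisman import Talisman

app = Flask(__name__)
talisman = Talisman(app)  # adds security headers and redirects HTTP to HTTPS
```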
The result is that all our previous tests fail... at least our newly written security unit test works ;-)
The reason for the failure is that Talisman enforces HTTPS - this is good in the production system, but not in testing, as HTTP is used here.
Therefore, the HTTPS enforcement is switched off in the test_XXXX.py files.
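A sketch of how that looks in a test module:

```python
# Sketch: in each test module's setUpClass, HTTPS enforcement is
# disabled so the Flask test client can keep using plain HTTP.
from unittest import TestCase

from service import talisman  # the instance created above (assumed import path)


class TestAccountService(TestCase):
    @classmethod
    def setUpClass(cls):
        talisman.force_https = False  # no HTTP -> HTTPS redirect during tests
```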
As a result, all our tests work again (including the newly written security unit test):
We can test the security headers with the following command:
curl -I localhost:5000
Before the security headers were added:
After the security headers have been added:
The options such as X-Frame-Options or Content-Security-Policy are included - everything works as intended.
The status code is 302 FOUND instead of 200 OK because curl requests HTTP by default and Talisman redirects the call to HTTPS (see the Location header).
Now the second part of the security user story: Adding CORS policies.
Following the TDD approach, the test cases to be fulfilled were defined first:
To fulfill the tests, Flask CORS dependency was installed and a CORS instance was created after the Flask app instantiation.
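A minimal sketch of that change:

```python
# Sketch: enable CORS right after the Talisman setup.
from flask_cors import CORS

CORS(app)  # allow cross-origin requests (default policy: any origin)
```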
The result: All tests were successful:
We can test the CORS policies with the following command:
curl -I localhost:5000
The CORS policy is now also displayed (see red marking):
The Security user story in the Kanban Board is moved to the Done column:
This ends Sprint 2 and we can start with the next task (Deploy the Application to Kubernetes).
In Task 4, a new scenario was added (defined by IBM):
Management has been very pleased with the changes you have been making.
It's now time to create a sprint plan to implement the last two stories in your Product Backlog,
which are "Containerize your microservice using Docker" and "Deploy your Docker image to Kubernetes."
One more thing. There is a new requirement.
You did such a great job automating the CI pipeline with GitHub Actions that all of the
developers seem much happier because of it. Management has decided that if a little automation
is good, then more automation would be better. They would like you to automate the deployment
to Kubernetes using Tekton once you have figured out how to do it manually.
One new user story was created to fulfill the requirements.
The content was specified by IBM:
As Sprint 2 is complete, Sprint 3 is also planned. The newly created user story is added here.
The updated Kanban Board / Sprint Plan 3:
These stories are now being implemented.
The user story Containerize your microservice using Docker was moved to the In Progress column and assigned to me.
The updated Kanban board:
An image is required to create a container.
And to create an image, a Dockerfile is required.
Therefore, the Dockerfile is implemented first:
Some of the commands I would not have thought of myself; they were specified by IBM.
These include the --no-cache-dir option and the following lines, for example:
RUN useradd --uid 1000 theia && chown -R theia /app
USER theia
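For context, the complete Dockerfile looks roughly like this (a sketch; the base image and the gunicorn start command are assumptions based on the rest of the setup):

```dockerfile
FROM python:3.9-slim

# Install dependencies first so they are cached in their own layer
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY service/ ./service/

# Run as a non-root user (the IBM-specified lines quoted above)
RUN useradd --uid 1000 theia && chown -R theia /app
USER theia

# Start the service on port 8080
EXPOSE 8080
CMD ["gunicorn", "--bind=0.0.0.0:8080", "--log-level=info", "service:app"]
```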
The Docker image is then built and the repository is tagged as accounts with the following command:
docker build -t accounts .
Check whether an image has been created with the following command:
docker images
The output looks good:
A container was then created using the image with the following command:
docker run --rm \
--link postgresql \
-e DATABASE_URI=postgresql://postgres:postgres@postgresql:5432/postgres \
-p 8080:8080 \
accounts
Explanation (see Docker documentation as well):
- --rm = remove the container when it exits
- --link postgresql = link to another container (for using the PostgreSQL database)
- -e DATABASE_URI=postgresql://postgres:postgres@postgresql:5432/postgres = environment variable
- -p 8080:8080 = publish the container's port to the host
- accounts = name of the container image
The application is started again using the Launch Application function from the IBM Cloud IDE.
The output:
The image is then tagged and pushed to the IBM Cloud Registry with the following command:
docker tag accounts us.icr.io/$SN_ICR_NAMESPACE/accounts:1
docker push us.icr.io/$SN_ICR_NAMESPACE/accounts:1
$SN_ICR_NAMESPACE is an environment variable already predefined by IBM Cloud IDE and refers to my account:
The push is then checked with the following command:
ibmcloud cr images
The output:
The image is there and so everything fits.
The user story (Containerize microservice using Docker) is now fully implemented and the next user story (Deploy your Docker image to Kubernetes) can be tackled.
The updated Kanban board:
Manifests / YAML files must be created for the user story Deploy your Docker image to Kubernetes so that the microservice can be deployed consistently.
For the time being, the microservice is deployed manually.
It will be deployed automatically in Task 5 - Building an automated CD DevOps Pipeline.
The manifests can then be reused.
The PostgreSQL database is needed for the application.
OpenShift provides a number of templates for creating services.
IBM has already predefined the template (file postgresql-ephemeral-template.json).
The resources are created and deployed using the template with the following commands:
oc create -f postgresql-ephemeral-template.json
oc new-app postgresql-ephemeral
With the command oc get all we can see that the Postgres service is running:
The manifests / YAML files can now be created.
IBM provides the tip that you can write the definition of the deployment in a YAML file with the help of the flags --dry-run=client (= ensures that nothing is actually created) and --output=yaml.
IBM also specifies that the image previously pushed to the IBM Cloud Registry should be used, with three replicas.
I found more information with the --help command:
The resulting command:
oc create deployment accounts \
--image=us.icr.io/sn-labs-christians21/accounts:1 \
--replicas=3 \
--dry-run=client \
--output=yaml > deploy/deployment.yaml
The output / YAML-file (deploy/deployment.yaml):
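The generated manifest is roughly as follows (a sketch abbreviated from the oc output; defaulted fields omitted):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts
  labels:
    app: accounts
spec:
  replicas: 3
  selector:
    matchLabels:
      app: accounts
  template:
    metadata:
      labels:
        app: accounts
    spec:
      containers:
        - name: accounts
          image: us.icr.io/sn-labs-christians21/accounts:1
          ports:
            - containerPort: 8080
```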
After applying the deployment to the cluster:
According to IBM, the following environment variables are needed to access the Postgres database:
- DATABASE_HOST
- DATABASE_NAME
- DATABASE_USER
- DATABASE_PASSWORD
A secret for Postgres was also created using the service template.
It contains the names of the variables that are inserted into deployment.yaml as environment variables.
The command oc describe secret postgresql was used to get the information.
The result:
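The environment variables were then wired into the container spec of deploy/deployment.yaml, roughly like this (a sketch; the secret key names are assumptions based on the oc describe output):

```yaml
# env section added to the accounts container in deploy/deployment.yaml
env:
  - name: DATABASE_HOST
    value: postgresql
  - name: DATABASE_NAME
    valueFrom:
      secretKeyRef:
        name: postgresql
        key: database-name
  - name: DATABASE_USER
    valueFrom:
      secretKeyRef:
        name: postgresql
        key: database-user
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgresql
        key: database-password
```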
The file deployment.yaml was then applied to the cluster again with the command oc create -f deploy/deployment.yaml.
A service object was created in order to be able to use the service from outside.
Here, the definition was also written with a command in a YAML file. The command:
oc expose deploy accounts \
--port=8080 \
--type=NodePort \
--dry-run=client \
--output=yaml > deploy/service.yaml
The result:
After applying the file deploy/service.yaml to the cluster:
A route object was created to obtain the URL of the service using the following command:
oc create route edge accounts --service=accounts
The result with the command oc get routes (URL is marked red):
If you enter the URL in your browser, our service will appear:
Everything works.
This means that manual deploying with Kubernetes / OpenShift is done and the Kanban board can be updated.
The next user story can be implemented.
The detailed view of the last user story:
Here is an overview of the related tasks in the pipeline:
First, a storage workspace (PersistentVolumeClaim, PVC) for the pipeline, as well as the pipeline itself and its tasks, were set up with the following commands:
oc create -f tekton/pvc.yaml
oc apply -f tekton/tasks.yaml
oc apply -f tekton/pipeline.yaml
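For reference, tekton/pvc.yaml is along these lines (a sketch; the requested storage size and access mode are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pipelinerun-pvc
spec:
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
```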
Verification that everything has been created as intended:
Part of the pipeline has already been implemented. The following tasks:
- init
- clone
See screenshot of tekton/pipeline.yaml below as well:
The pipeline already references a task named git-clone.
This does not have to be written in tekton/tasks.yaml itself, because a predefined task already exists in the Tekton Hub.
This is installed in the cluster with the following command:
tkn hub install task git-clone
Verification of the installation of task git-clone:
The pipeline is now started in order to see the output.
The following command is used:
tkn pipeline start cd-pipeline \
-p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
-p branch="main" \
-w name=pipeline-workspace,claimName=pipelinerun-pvc \
-s pipeline \
--showlog
Use the -h option for more information on passing the values for the PVC etc.
The value of branch can be changed for test purposes (e.g. cd-pipeline instead of main).
The result: the pipeline succeeded.
The next task is lint with Flake8.
This does not have to be written in tekton/tasks.yaml itself, because a predefined task already exists in the Tekton Hub.
This is installed in the cluster with the following command:
tkn hub install task flake8
Verification of the installation of task flake8:
The task is built into tekton/pipeline.yaml, applied with the command oc apply -f tekton/pipeline.yaml and the pipeline is restarted with the following command:
tkn pipeline start cd-pipeline \
-p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
-p branch="main" \
-w name=pipeline-workspace,claimName=pipelinerun-pvc \
-s pipeline \
--showlog
The logs:
As you can see, the pipeline failed because I didn't do my linting correctly...
After I fixed my linting problem, the pipeline works:
The next task is tests with nose.
This time there is no predefined task in the Tekton Hub.
We have to create it ourselves.
The definition is in tekton/tasks.yaml. The implemented code:
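The task is roughly as follows (a sketch of tekton/tasks.yaml; the step image and pip commands are assumptions):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: nose
spec:
  workspaces:
    - name: source          # the cloned repository from the pipeline workspace
  params:
    - name: args
      description: Arguments to pass to nose
      type: string
      default: "-v"
  steps:
    - name: nosetests
      image: python:3.9-slim
      workingDir: $(workspaces.source.path)
      script: |
        #!/bin/bash
        set -e
        python -m pip install --upgrade pip wheel
        pip install -r requirements.txt
        nosetests $(params.args)
```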
The task was then added to the pipeline (tekton/pipeline.yaml):
The two changes were then added to the cluster:
oc apply -f tekton/tasks.yaml
oc apply -f tekton/pipeline.yaml
Then start the pipeline again to see the results of the tests task:
tkn pipeline start cd-pipeline \
-p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
-p branch="main" \
-w name=pipeline-workspace,claimName=pipelinerun-pvc \
-s pipeline \
--showlog
Everything fits. Now the next task in the pipeline: build.
This is required to build the image.
There is a task for this in the Tekton Hub: buildah.
It does not need to be installed separately as it has already been installed as a ClusterTask.
ClusterTasks are cluster-scoped, so they are available to every pipeline rather than just a single one.
With the command tkn clustertask ls you can see all ClusterTasks and the buildah task is listed:
The build task (with reference to buildah) was then integrated into the pipeline and the changes applied with command oc apply -f tekton/pipeline.yaml:
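The pipeline entry is roughly as follows (a sketch of the item under spec.tasks in tekton/pipeline.yaml; parameter and workspace names follow the buildah task's documented interface):

```yaml
- name: build
  workspaces:
    - name: source
      workspace: pipeline-workspace
  taskRef:
    name: buildah
    kind: ClusterTask           # reference the preinstalled ClusterTask
  params:
    - name: IMAGE
      value: "$(params.build-image)"
  runAfter:
    - tests
    - lint
```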
Start the pipeline again - this time with an additional parameter (build-image):
tkn pipeline start cd-pipeline \
-p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
-p branch="main" \
-p build-image="image-registry.openshift-image-registry.svc:5000/$SN_ICR_NAMESPACE/accounts:1" \
-w name=pipeline-workspace,claimName=pipelinerun-pvc \
-s pipeline \
--showlog
Everything works:
Now comes the last task: deploy.
There is a task for this in the Tekton Hub: openshift-client.
It does not need to be installed separately as it has already been installed as a ClusterTask.
Command tkn clustertask ls:
The deploy task (with reference to openshift-client) was then integrated into the pipeline and the changes applied with command oc apply -f tekton/pipeline.yaml:
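The pipeline entry is roughly as follows (a sketch; the SCRIPT shown here, including the hypothetical image placeholder substitution, is one plausible variant rather than the exact one used):

```yaml
- name: deploy
  workspaces:
    - name: manifest-dir
      workspace: pipeline-workspace
  taskRef:
    name: openshift-client
    kind: ClusterTask
  params:
    - name: SCRIPT
      value: |
        # IMAGE_NAME_HERE is a hypothetical placeholder in the manifest
        sed -i "s|IMAGE_NAME_HERE|$(params.build-image)|g" deploy/deployment.yaml
        oc apply -f deploy/
  runAfter:
    - build
```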
Start the pipeline again:
tkn pipeline start cd-pipeline \
-p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
-p branch="main" \
-p build-image="image-registry.openshift-image-registry.svc:5000/$SN_ICR_NAMESPACE/accounts:1" \
-w name=pipeline-workspace,claimName=pipelinerun-pvc \
-s pipeline \
--showlog
The logs:
The user story is complete, Sprint 3 is finished and the Kanban board has been updated:
That completes the project. If you have read this far, thank you very much for your attention! :-)
Important: This project is designed to be executed in the IBM Developer Skills Network Cloud IDE with OpenShift.
Run the following command after cloning the repository (Note: do NOT run this program as a bash script. It sets environment variables and so must be sourced):

source bin/setup.sh

This will install Python 3.9, make it the default, modify the bash prompt, and create and activate a Python virtual environment.
After sourcing it, the prompt should look like this:
(venv) theia:project$

Under normal circumstances you should not have to run these commands.
They are performed automatically at setup but may be useful when things go wrong:
Activate the Python 3.9 environment with:
source ~/venv/bin/activate

The project dependencies are installed as part of the setup process, but should you need to install them again, first make sure that the Python 3.9 virtual environment is activated and then use the make install command:
make install

This project uses Postgres running in a Docker container.
If for some reason the service is not available you can start it with:
make db

You can use the docker ps command to make sure that Postgres is up and running.
The code for the microservice is contained in the service package. All of the tests are in the tests folder.
The code follows the Model-View-Controller pattern, with all of the database code and business logic in the model (models.py) and all of the RESTful routing in the controller (routes.py).
├── service <- microservice package
│ ├── common/ <- common log and error handlers
│ ├── config.py <- Flask configuration object
│ ├── models.py <- code for the persistent model
│ └── routes.py <- code for the REST API routes
├── setup.cfg <- tools setup config
└── tests <- folder for all of the tests
├── factories.py <- test factories
├── test_cli_commands.py <- CLI tests
├── test_models.py <- model unit tests
└── test_routes.py <- route unit tests
The Account model contains the following fields:
| Name | Type | Optional |
|---|---|---|
| id | Integer | False |
| name | String(64) | False |
| email | String(64) | False |
| address | String(256) | False |
| phone_number | String(32) | True |
| date_joined | Date | False |
This repo can also be used for local Kubernetes development.
It is not advised that you run these commands in the Cloud IDE environment.
The purpose of these commands is to simulate the Cloud IDE environment locally on your computer.
At a minimum, you will need Docker Desktop installed on your computer.
For the full development environment, you will also need Visual Studio Code with the Remote Containers extension from the Visual Studio Marketplace.
All of these can be installed manually by clicking on the links above, or you can use a package manager like Homebrew on Mac or Chocolatey on Windows.
Please only use these commands for working stand-alone on your own computer with the VSCode Remote Container environment provided.
- Bring up a local K3D Kubernetes cluster:
  $ make cluster
- Install Tekton:
  $ make tekton
- Install the ClusterTasks that the Cloud IDE has:
  $ make clustertasks
You can now perform Tekton development locally, just like in the Cloud IDE lab environment.
If you have any questions, please feel free to reach out via email: christian-schwanse (at) gmx.net