
Capstone Project - DevOps and Software Engineering

The badge below reflects the current build status:

Build Status

Table of Contents
  1. Introduction
  2. Course Information
  3. Information about the Project
  4. What I have done as Part of the Project
  5. Getting Started
  6. Contact

Introduction

This capstone project / repository was created as part of IBM's DevOps and Software Engineering program.
A template was used - see IBM repository: https://github.com/ibm-developer-skills-network/aolwx-devops-capstone-template.
Many thanks to the IBM course team as well as John Rofrano (Primary Instructor) and all the other contributors!

(back to top)



Scenario

You have been asked by the customer account manager at your company to
develop an account microservice to keep track of the customers
on your e-commerce website. Since it is a microservice, it is expected
to have a well-formed REST API that other microservices can call.
This service initially needs to create, read, update, delete,
and list customers.

You have also been told that someone else has started on this task.
They have already developed the database model and a Python
Flask-based REST API with an endpoint to create a customer account.

Tasks that need to be completed:
- Create and execute sprint plans
- Develop a RESTful Service using Test Driven Development (TDD)
- Add Continuous Integration (CI) and Security to the Repository
- Deploy the application to Kubernetes
- Build an automated CD DevOps Pipeline

(back to top)



Preview Images

Preview images of the project:

planning-kanban-done

17 create error handler py and implement unit tests

21 demo create an account terminal

5 part 1 implementing ci build yaml

6 failed build - linting flake8

17 output after adding Flask CORS

13 oc implementing ibm tip

12 pipeline succeeded task nosetests

(back to top)



Course Information

Title: DevOps and Software Engineering
Type: Capstone Project
Course Provider: IBM

(back to top)



Information about the Project

General

  • Client: Myself
  • Project Goal: Demonstrate the knowledge gained in the IBM Program DevOps and Software Engineering with a Capstone Project.
  • Number of Project Participants: 1 (cloned IBM's template repository and developed the rest on my own)
  • Time Period: December 2024
  • Industry / Area: DevOps, Software Engineering
  • Role: Developer
  • Languages: English
  • Result: Successfully built an account microservice. Demonstrated skills and acquired new knowledge.

Tech Stack

With regard to my role:

  • GitHub (Version Control, Kanban Board, Actions, ...)
  • IBM Cloud IDE (based on Theia and containers)
  • Programming Language: Python
  • Python Web Framework: Flask
  • Python Unit Test Framework: nose
  • Python Linting: Flake8
  • Containerization: Docker
  • Container Orchestration: Kubernetes / OpenShift
  • Image Registry: IBM Cloud Container Registry
  • CI/CD Tool: Tekton

(back to top)



What I have done as Part of the Project

Task 1 - Creating and executing Sprint Plans

The RESTful microservice is developed following an agile plan (Scrum).
This means the first task is Sprint 0.

The main goal of Sprint 0 is to set up the team for future delivery by creating the basic project skeleton, defining the vision and preparing the product backlog.

First, a user story template was created (can be found in: .github/ISSUE_TEMPLATE):

planning-storytemplate-done

The template provides the basis for the user stories to be created for the sprints.
It uses the Gherkin Syntax.
Gherkin is a simple description language with very few rules for formulating scenarios in a structured way in behavior-driven development (BDD).
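
A story written with such a template might look roughly like this (a sketch; the concrete wording is an assumption, not the actual template content):

    As a customer account manager
    I need the ability to update an account
    So that I can correct customer data

    Acceptance Criteria (Gherkin):
    Given an account exists in the service
    When I send a PUT request with updated data
    Then the account should be updated and returned with status 200 OK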

Next, the user stories were created.
The titles were provided by IBM as part of the project (e.g. Update an account in the service) and I filled them with content.
Two examples (Note: Screenshots were taken later, therefore they are already labeled and assigned to a project):

Issue Update an account in the service

Issue Containerize microservice using docker

The acceptance criteria define when a story counts as Done.

After the user stories were completed, a GitHub project (Kanban board) was created.
All issues were assigned to the New Issues column:

planning-userstories-done

The issues were then moved either to the icebox or the product backlog, depending on their priority.
For example, deploying is one of the last steps, which is why it ended up in the icebox.

The priorities of the issues in the backlog have also been defined.
P0 is the highest priority and is therefore listed at the top.
P2 is the lowest priority and is therefore listed at the bottom.

planning-labels-done

The following was also assigned to the issues at the end:

  • An iteration field Sprint which has a duration of 1 week. At this point, the date is 14/12/2024. This means that the 1st sprint starts on 14/12/2024 and ends on 20/12/2024.
  • The size and estimated story points were also determined. The scale (provided by IBM) is 3, 5, 8, 13 = S, M, L, XL.

The issues were moved from the product backlog to the sprint backlog and the result is as follows:

planning-kanban-done

Task 1 is finished. Task 2 can be started.

(back to top)



Task 2 - Developing a RESTful Service using Test Driven Development

In this task, the REST API is expanded to include additional endpoints.

Test Driven Development is used in this project.
This means that the tests are always written first and then the actual code that is to fulfill the tests.
The following rule applies: Code coverage must be at least 95 %.

First, an overview of the API guidelines.
Development is based on these.

(back to top)



REST API Guidelines

The REST API guidelines were specified by IBM:

1 REST API Guidelines

(back to top)



Setting up the Development Environment

One thing is still missing from the development environment setup: configuring the nosetests command with additional options.
This saves typing when running unit tests later.

The modified setup.cfg file:

2 setup dev environment nosetests
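
As an illustration, the [nosetests] section in setup.cfg might look roughly like this (a sketch; the exact options are assumptions based on the nose coverage and pinocchio spec plugins):

    [nosetests]
    # colored spec-style test output (pinocchio plugin)
    verbosity=2
    with-spec=1
    spec-color=1
    # measure code coverage of the service package on every run
    with-coverage=1
    cover-erase=1
    cover-package=service

With this in place, a plain nosetests call behaves like the full command with all options spelled out.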

This completes the user story for setting up the development environment and the Kanban board is updated:

3 user story complete and next user story read an account

At the same time, the next user story was defined: Read an account from service (see column In Progress).

(back to top)



Implementing API Endpoint - Read an Account

Following the TDD approach, the test cases to be fulfilled were defined first:

4 implement test for read an account

The code was then written to fulfill the tests.
The code coverage is above 95 %, so the requirement is met.

5 implement read an account function
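
For illustration, such a read endpoint can look roughly like this (a sketch; the app object, Account model and status module are assumptions based on the course template):

    # sketch of a read endpoint, assuming the course template structure
    from flask import abort
    from service import app
    from service.models import Account
    from service.common import status  # HTTP status code constants

    @app.route("/accounts/<int:account_id>", methods=["GET"])
    def get_accounts(account_id):
        """Reads a single Account by its id"""
        account = Account.find(account_id)
        if not account:
            abort(status.HTTP_404_NOT_FOUND,
                  f"Account with id [{account_id}] could not be found.")
        return account.serialize(), status.HTTP_200_OK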

At the same time, the next user story was defined: Update an account from service (see column In Progress):

6 user story complete and next user story update an account

(back to top)



Implementing API Endpoint - Update an Account

Following the TDD approach, the test cases to be fulfilled were defined first:

7 part 1 implement test for update an account

7 part 2 implement test for update an account
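
One of these update tests might look roughly like this (a sketch; AccountFactory, BASE_URL and the Flask test client self.client are assumptions based on the course template):

    def test_update_account(self):
        """It should Update an existing Account"""
        # create an Account to update
        account = AccountFactory()
        resp = self.client.post(BASE_URL, json=account.serialize())
        self.assertEqual(resp.status_code, status.HTTP_201_CREATED)

        # change a field and send the account back with PUT
        new_account = resp.get_json()
        new_account["name"] = "Known Name"
        resp = self.client.put(f"{BASE_URL}/{new_account['id']}", json=new_account)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        self.assertEqual(resp.get_json()["name"], "Known Name")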

The code was then written to fulfill the tests.
The code coverage is above 95 %, so the requirement is met.

8 implement update an account function

At the same time, the next user story was defined: Delete an account from service (see column In Progress):

update-accounts

(back to top)



Implementing API Endpoint - Delete an Account

Following the TDD approach, the test cases to be fulfilled were defined first:

10 implement test for delete an account

The code was then written to fulfill the tests.
The code coverage is above 95 %, so the requirement is met.

11 implement delete an account
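
For illustration, such a delete endpoint can look roughly like this (a sketch under the same assumptions as the read endpoint above; deletes are idempotent, so a missing account still returns 204):

    @app.route("/accounts/<int:account_id>", methods=["DELETE"])
    def delete_accounts(account_id):
        """Deletes an Account by its id"""
        account = Account.find(account_id)
        if account:
            account.delete()
        return "", status.HTTP_204_NO_CONTENT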

At the same time, the next user story was defined: List all accounts from service (see column In Progress):

12 user story complete and next user story list all accounts

(back to top)



Implementing API Endpoint - List all Accounts

Following the TDD approach, the test cases to be fulfilled were defined first:

13 implement test for list all accounts

The code was then written to fulfill the tests.
The code coverage is above 95 %, so the requirement is met.

14 implement list all accounts
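
For illustration, such a list endpoint can look roughly like this (a sketch under the same assumptions as the read endpoint above):

    from flask import jsonify

    @app.route("/accounts", methods=["GET"])
    def list_accounts():
        """Lists all Accounts"""
        accounts = Account.all()
        account_list = [account.serialize() for account in accounts]
        return jsonify(account_list), status.HTTP_200_OK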

The Kanban Board has been updated:

15 user story complete list all accounts

As an additional task, the total code coverage is being improved.

(back to top)



Improving Total Code Coverage

The total code coverage is currently below 95 %.
There is a lot of potential for improvement in some areas (e.g. error_handlers.py file):

16 improve code coverage error handlers potencial

A new test file was created to test the error handlers: tests/test_error_handlers.py.
The test suite for error handlers was set up (setup and teardown) and the unit tests were written:

17 create error handler py and implement unit tests
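
One such error handler test might look roughly like this (a sketch; it assumes the existing test client and BASE_URL from the route tests):

    def test_method_not_allowed(self):
        """It should return 405 when an unsupported method is used"""
        # DELETE is not defined on the collection endpoint
        resp = self.client.delete(BASE_URL)
        self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)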

The goal was achieved: The total code coverage is now above 95 %.

18 total code coverage above 95 percent

Now comes the last step of the task: Demonstrating the REST API.

(back to top)



Demonstration of the REST API

First, local access to the service is enabled.
The following command is used to refresh the database:

flask db-create
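
For context, db-create is a custom Flask CLI command defined in the project. A minimal sketch of how such a command can be registered (module paths are assumptions based on the course template):

    from service import app
    from service.models import db

    @app.cli.command("db-create")
    def db_create():
        """Drops and recreates all database tables"""
        db.drop_all()
        db.create_all()
        db.session.commit()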

Then use the command below to start the service with the new database:

make run

The terminal output:

19 start service with new database terminal

The application is started using the Launch Application function in the IBM Cloud IDE.
A port number is also required to start the application.

19 5 launch application in ide

According to the terminal, the application listens on port 5000, so port 5000 is entered.
The output:

20 start service with new database application

The service is now running.
The curl command is used to make REST calls to the implemented endpoints.

Demonstrating the Create an Account endpoint with the following command:

curl -i -X POST http://127.0.0.1:5000/accounts \
-H "Content-Type: application/json" \
-d '{"name":"John Doe","email":"john@doe.com","address":"123 Main St.","phone_number":"555-1212"}'

21 demo create an account terminal

Demonstrating the List all Accounts endpoint with the following command:

curl -i -X GET http://127.0.0.1:5000/accounts

22 demo list all accounts terminal

Demonstrating the Read an Account endpoint with the following command:

curl -i -X GET http://127.0.0.1:5000/accounts/1

23 demo read an account terminal

Demonstrating the Update an Account endpoint with the following command:

curl -i -X PUT http://127.0.0.1:5000/accounts/1 \
-H "Content-Type: application/json" \
-d '{"name":"John Doe","email":"john@doe.com","address":"123 Main St.","phone_number":"555-1111"}'

24 demo update an account terminal

The difference: The phone number now ends with 1111 instead of 1212 (555-1212 -> 555-1111).

Demonstrating the Delete an Account endpoint with the following command:

curl -i -X DELETE http://127.0.0.1:5000/accounts/1

25 demo delete an account terminal

After deletion, all accounts were displayed to show that the account had actually been deleted.
The list is empty, so the account has been deleted.

The REST API has been finished, Sprint 1 is now complete and the next task can now be started: Task 3 - Add Continuous Integration and Security to the Repository.

(back to top)



Task 3 - Adding Continuous Integration and Security to the Repository

Additional Scenario and Planning Sprint 2

In Task 3, a new scenario was added (defined by IBM):

Management has been looking for ways to increase developer productivity
and has noticed that developers spend a lot of time checking
that all the tests pass before approving each pull request.
Management has decided it is time to automate this task
by implementing continuous integration (CI) using GitHub Actions.

There have also been many stories in the news about security breaches
and exploits, and management is concerned about the security of your microservice.
In an effort to be proactive, they have decided that you need to
add defensive security measures to your microservice in the
form of security headers and cross-origin resource sharing (CORS) policies.

Two new user stories were created to fulfill the requirements.
This time, these were specified by IBM.
As Sprint 1 is complete, Sprint 2 is also planned. The newly created user stories are added here.

Note: And yes... the week of Sprint 1 hasn't actually passed ;-)
I'm currently on vacation from work, so there's lots of time, and I want to finish the project before Christmas (day of writing: 16/12/2024).

The first user story - Need the Ability to Automate Continuous Integration Checks:

1 Feature CI Checks Issue

The second user story - Need to Add Security Headers and CORS Policies:

2 Feature Security Headers and CORS Issue

The updated Kanban Board / Sprint Plan 2:

3 Sprint2 Plan

These stories are now being implemented.

(back to top)



Implementing Continuous Integration Automation

A key practice in DevOps is Continuous Integration (CI), where developers continuously integrate their code into the main branch by making frequent pull requests.
To make life easier for developers, a CI pipeline is now being implemented with the help of GitHub Actions.

I assign the user story in the Kanban board to myself and move it to the In Progress column:

4 CI in Progress Kanban Board and assign to myself

The implemented YAML file (.github/workflows/ci-build.yaml) for the GitHub Actions workflow:

5 part 1 implementing ci build yaml

5 part 2 implementing ci build yaml
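
A condensed sketch of what such a workflow can look like (the container image, branch filters and exact flake8 flags are assumptions, not the actual file):

    name: CI Build
    on:
      push:
        branches: [main]
      pull_request:
        branches: [main]

    jobs:
      build:
        runs-on: ubuntu-latest
        container: python:3.9-slim
        steps:
          - name: Checkout
            uses: actions/checkout@v3

          - name: Install dependencies
            run: |
              python -m pip install --upgrade pip wheel
              pip install -r requirements.txt

          - name: Lint with flake8
            run: |
              # stop the build on syntax errors or undefined names
              flake8 service --count --select=E9,F63,F7,F82 --show-source --statistics
              flake8 service --count --max-complexity=10 --max-line-length=127 --statistics

          - name: Run unit tests with nose
            run: nosetests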

Once the workflow has been implemented, the results are visible on GitHub under the Actions tab.
The tab shows that the build failed because I had not cleaned up my Python linting:

6 failed build - linting flake8

Part of the CI user story is also the addition of a badge in the README.md, which shows the build status.
This also indicates that the build has failed:

6 part 2 failed build badge readme md

After I fixed the linting problems, the CI workflow I created also works:

7 part 1 successful build

7 part 2 passing build bage readme md

This completes all the acceptance criteria of the CI user story and the Kanban board can be updated.
The CI user story was moved to the Done column and at the same time the next user story (Need to Add Security Headers and CORS Policies) was moved to the In Progress column:

9 move security user story to in progress

Time to implement the next user story.

(back to top)



Implementing Security Headers and CORS Policies

The next step is to increase the security of the microservice.

First, security headers are implemented with the help of Flask-Talisman.
Talisman sets various HTTP security headers and forces REST API clients to use HTTPS.

Following the TDD approach, the test cases to be fulfilled were defined first:

11 part 1 implement tests security header

11 part 2 implement tests security header

Regarding the options / values of the headers:
More information can be found in the Flask-Talisman documentation: https://github.com/GoogleCloudPlatform/flask-talisman

To fulfill the tests, Flask Talisman dependency was installed and a Talisman instance was created after the Flask app instantiation.
The result is that all our previous tests fail... at least our newly written security unit test works ;-)

12 adding Talisman tests failed

The reason for the failure is that Talisman enforces HTTPS - this is good in the production system, but not in testing, as HTTP is used here.
Therefore, the HTTPS enforcement is switched off in the test_XXXX.py files.
As a result, all our tests work again (including the newly written security unit test):

13 disable https when testing
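
Both pieces together might look roughly like this (a sketch; module paths and names are assumptions based on the course template):

    # service/__init__.py: wrap the app with Talisman right after creation
    from flask import Flask
    from flask_talisman import Talisman

    app = Flask(__name__)
    talisman = Talisman(app)

    # tests (setUp): keep the test client on plain HTTP
    # by switching off the HTTPS redirect
    talisman.force_https = False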

We can test the security headers with the following command:

curl -I localhost:5000

Before the security headers were added:

10 output before adding Flask Talisman security headers

After the security headers have been added:

14 output after adding Flask Talisman security headers

The options such as X-Frame-Options or Content-Security-Policy are included - everything works as intended.
The status code is 302 FOUND instead of 200 OK because curl uses HTTP by default and Talisman redirects the request to HTTPS (see Location in the header).

Now the second part of the security user story: Adding CORS policies.

Following the TDD approach, the test cases to be fulfilled were defined first:

15 implement tests cors policies header

To fulfill the tests, Flask CORS dependency was installed and a CORS instance was created after the Flask app instantiation.
The result: All tests were successful:

16 implement cors policies tests successful
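
A sketch of the CORS setup and a matching test (the names and the HTTPS_ENVIRON helper are assumptions based on the course template):

    # service/__init__.py: enable CORS right after the app is created
    from flask_cors import CORS
    CORS(app)  # Flask-CORS allows all origins ("*") by default

    # tests: assert the CORS header is present
    HTTPS_ENVIRON = {"wsgi.url_scheme": "https"}  # satisfy Talisman in tests

    def test_cors_policies(self):
        """It should return a CORS header"""
        resp = self.client.get("/", environ_overrides=HTTPS_ENVIRON)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        self.assertEqual(resp.headers.get("Access-Control-Allow-Origin"), "*")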

We can test the CORS policies with the following command:

curl -I localhost:5000

The CORS policy is now also displayed (see red marking):

17 output after adding Flask CORS

The Security user story in the Kanban Board is moved to the Done column:

18 updated kanban board moved security user story

This ends Sprint 2 and we can start with the next task (Deploy the Application to Kubernetes).

(back to top)



Task 4 - Deploying the Application to Kubernetes

Additional Scenario and Planning Sprint 3

In Task 4, a new scenario was added (defined by IBM):

Management has been very pleased with the changes you have been making.
It's now time to create a sprint plan to implement the last two stories in your Product Backlog,
which are "Containerize your microservice using Docker" and "Deploy your Docker image to Kubernetes."

One more thing. There is a new requirement.
You did such a great job automating the CI pipeline with GitHub Actions that all of the
developers seem much happier because of it. Management has decided that if a little automation
is good, then more automation would be better. They would like you to automate the deployment
to Kubernetes using Tekton once you have figured out how to do it manually.

One new user story was created to fulfill the requirements.
The content was specified by IBM:

1 new user story automate deployment

As Sprint 2 is complete, Sprint 3 is also planned. The newly created user story is added here.

The updated Kanban Board / Sprint Plan 3:

2 Updated Kanbanboard Sprint 3

These stories are now being implemented.

(back to top)



Containerizing the Microservice using Docker

The user story Containerize microservice using Docker was moved to the In Progress column and assigned to me.
The updated Kanban board:

3 assign container user story to myself and progress column

An image is required to create a container.
And to create an image, a Dockerfile is required.
Therefore, the Dockerfile is implemented first:

4 Create Dockerfile

Some commands were specified by IBM that I would not have thought of myself.
These include the use of the option --no-cache-dir and, for example, the following lines, which run the service as a non-root user:

RUN useradd --uid 1000 theia && chown -R theia /app
USER theia
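
Putting it together, the Dockerfile might look roughly like this (a sketch; the base image, file layout and the gunicorn start command are assumptions):

    FROM python:3.9-slim

    # install dependencies first to make use of the Docker layer cache
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # copy the application code
    COPY service/ ./service/

    # run as a non-root user for better security
    RUN useradd --uid 1000 theia && chown -R theia /app
    USER theia

    # run the service on port 8080
    EXPOSE 8080
    CMD ["gunicorn", "--bind=0.0.0.0:8080", "--log-level=info", "service:app"]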

The Docker image is then built and tagged as accounts with the following command:

docker build -t accounts .

Check whether an image has been created with the following command:

docker images

The output looks good:

5 check docker images

A container was then created using the image with the following command:

docker run --rm \
    --link postgresql \
    -e DATABASE_URI=postgresql://postgres:postgres@postgresql:5432/postgres \
    -p 8080:8080 \
    accounts

Explanation (see Docker documentation as well):

  • --rm = Remove container when it exits
  • --link postgresql = Link to another container (for using PostgreSQL database)
  • -e DATABASE_URI=postgresql://postgres:postgres@postgresql:5432/postgres = Environment variable
  • -p 8080:8080 = Publish container's port to host
  • accounts = Name of container image

The application is started again using the Launch Application function from the IBM Cloud IDE.
The output:

6 docker run and launch application

7 output application

The image is then tagged and pushed to the IBM Cloud Container Registry with the following commands:

docker tag accounts us.icr.io/$SN_ICR_NAMESPACE/accounts:1
docker push us.icr.io/$SN_ICR_NAMESPACE/accounts:1

$SN_ICR_NAMESPACE is an environment variable already predefined by IBM Cloud IDE and refers to my account:

8 output env var NAMESPACE

The push is then checked with the following command:

ibmcloud cr images

The output:

9 check ibmcloud container registry

The image is there, so everything worked.

The user story (Containerize microservice using Docker) is now fully implemented and the next user story (Deploy your Docker image to Kubernetes) can be tackled.
The updated Kanban board:

10 move user stories after finishing containerizing

(back to top)



Deploying to Kubernetes

Manifests / YAML files must be created for the user story Deploy your Docker image to Kubernetes so that the microservice can be deployed consistently.
For the time being, the microservice is deployed manually.
It will be deployed automatically in Task 5 - Building an automated CD DevOps Pipeline.
The manifests can then be reused.

The PostgreSQL database is needed for the application.
OpenShift provides a number of templates for creating services.
IBM has already predefined the template (file postgresql-ephemeral-template.json).

The resources are created and deployed using the template with the following commands:

oc create -f postgresql-ephemeral-template.json
oc new-app postgresql-ephemeral

With the command oc get all we can see that the Postgres service is running:

11 create postgres ephemeral and pod is running

The manifests / YAML files can now be created.
IBM provides the tip that the deployment definition can be written to a YAML file using the flags --dry-run=client (ensures that nothing is actually created) and --output=yaml.
IBM also specifies that the image pushed earlier to the IBM Cloud Container Registry should be used, with three replicas.

I found more information with the --help command:

12 oc --help tip from ibm

The resulting command:

oc create deployment accounts \
    --image=us.icr.io/sn-labs-christians21/accounts:1 \
    --replicas=3 \
    --dry-run=client \
    --output=yaml > deploy/deployment.yaml

The output / YAML-file (deploy/deployment.yaml):

13 oc implementing ibm tip

After applying the deployment to the cluster:

14 oc applying deployment yaml

According to IBM, the following environment variables are needed to access the Postgres database:

  • DATABASE_HOST
  • DATABASE_NAME
  • DATABASE_USER
  • DATABASE_PASSWORD

A secret for Postgres was also created by the service template.
It contains the credentials that are referenced in deployment.yaml as environment variables.
The command oc describe secret postgresql was used to get this information.
The result:

14 oc implementing env vars secret
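
The resulting env section in deployment.yaml might look roughly like this (a sketch; the secret key names are assumptions based on the output of oc describe secret postgresql):

    env:
      # the host is simply the name of the Postgres service
      - name: DATABASE_HOST
        value: postgresql
      # name, user and password are pulled from the postgresql secret
      - name: DATABASE_NAME
        valueFrom:
          secretKeyRef:
            name: postgresql
            key: database-name
      - name: DATABASE_USER
        valueFrom:
          secretKeyRef:
            name: postgresql
            key: database-user
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: postgresql
            key: database-password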

The file deployment.yaml was then applied to the cluster again with the command oc create -f deploy/deployment.yaml.

A service object was created to make the microservice reachable from outside the cluster.
Here, the definition was also written with a command in a YAML file. The command:

oc expose deploy accounts \
   --port=8080 \
   --type=NodePort \
   --dry-run=client \
   --output=yaml > deploy/service.yaml

The result:

15 expose service yaml with dry run

After applying the file deploy/service.yaml to the cluster:

16 check service

A route object was created to obtain the URL of the service using the following command:

oc create route edge accounts --service=accounts

The result with the command oc get routes (URL is marked red):

17 create route and copy url

If you enter the URL in your browser, our service will appear:

18 url route output

Everything works.
This means that the manual deployment to Kubernetes / OpenShift is done and the Kanban board can be updated.
The next user story can be implemented.

19 update kanban board move next user story to progress

(back to top)



Task 5 - Building an automated CD DevOps Pipeline

The detailed view of the last user story:

1 user story cd pipeline

Here is an overview of the related tasks in the pipeline:

2 overview of pipeline tasks

First, a storage workspace (PersistentVolumeClaim, PVC) for the pipeline as well as the pipeline tasks and the pipeline itself were set up with the following commands:

oc create -f tekton/pvc.yaml
oc apply -f tekton/tasks.yaml
oc apply -f tekton/pipeline.yaml

Verification that everything has been created as intended:

3 verifying tasks pvc and pipeline

Part of the pipeline has already been implemented, namely the following tasks:

  • init
  • clone

See screenshot of tekton/pipeline.yaml below as well:

4 initial pipeline tasks

The clone step references a task named git-clone.
This does not have to be written in tekton/tasks.yaml itself, because a predefined task already exists in the Tekton Hub.
This is installed in the cluster with the following command:

tkn hub install task git-clone

Verification of the installation of task git-clone:

5 verifying installation git-clone

The pipeline is now started in order to see the output.
The following command is used:

tkn pipeline start cd-pipeline \
    -p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
    -p branch="main" \
    -w name=pipeline-workspace,claimName=pipelinerun-pvc \
    -s pipeline \
    --showlog

Use the option -h for more information on passing the values for the PVC etc.
The value of branch can be changed for test purposes (e.g. cd-pipeline instead of main).
The result: the pipeline succeeded.

6 starting pipeline and verifying succeeded

The next task is lint with Flake8.
This does not have to be written in tekton/tasks.yaml itself, because a predefined task already exists in the Tekton Hub.
This is installed in the cluster with the following command:

tkn hub install task flake8

Verification of the installation of task flake8:

7 verifying installation flake8 task

The task is built into tekton/pipeline.yaml, applied with the command oc apply -f tekton/pipeline.yaml and the pipeline is restarted with the following command:

tkn pipeline start cd-pipeline \
    -p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
    -p branch="main" \
    -w name=pipeline-workspace,claimName=pipelinerun-pvc \
    -s pipeline \
    --showlog

The logs:

8 implementing lint and failed linting

As you can see, the pipeline failed because I didn't do my linting correctly...
After I fixed my linting problem, the pipeline works:

9 fixing linting and pipeline succeeded

The next task is tests with nose.
This time there is no predefined task in the Tekton Hub.
We have to create it ourselves.
The definition is in tekton/tasks.yaml. The implemented code:

10 implementing task nosetests
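
Such a task might look roughly like this (a sketch; the image and arguments are assumptions):

    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: nose
    spec:
      workspaces:
        - name: source
      params:
        - name: args
          description: Arguments to pass to nose
          type: string
          default: "-v"
      steps:
        - name: nosetests
          image: python:3.9-slim
          workingDir: $(workspaces.source.path)
          script: |
            #!/bin/bash
            set -e
            python -m pip install --upgrade pip wheel
            pip install -qr requirements.txt
            nosetests $(params.args)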

The task was then added to the pipeline (tekton/pipeline.yaml):

11 implementing pipeline task nosetests

The two changes were then added to the cluster:

oc apply -f tekton/tasks.yaml
oc apply -f tekton/pipeline.yaml

Then start the pipeline again to see the results of the tests task:

tkn pipeline start cd-pipeline \
    -p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
    -p branch="main" \
    -w name=pipeline-workspace,claimName=pipelinerun-pvc \
    -s pipeline \
    --showlog

12 pipeline succeeded task nosetests

Everything passes. Now comes the next task in the pipeline: build.
This is required to build the image.
There is a task for this in the Tekton Hub: buildah.

It does not need to be installed separately because it is already available as a ClusterTask.
ClusterTasks are not tied to a single pipeline but are available cluster-wide.

With the command tkn clustertask ls you can see all ClusterTasks and the buildah task is listed:

13 clustertask buildah

The build task (with reference to buildah) was then integrated into the pipeline and the changes applied with command oc apply -f tekton/pipeline.yaml:

14 part 1 implementing buildah task to pipeline

14 part 2 implementing buildah task to pipeline
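
The pipeline entry for the build task might look roughly like this (a sketch; the workspace and parameter names are assumptions based on the pipeline shown above):

    - name: build
      workspaces:
        - name: source
          workspace: pipeline-workspace
      taskRef:
        name: buildah
        kind: ClusterTask
      params:
        - name: IMAGE
          value: "$(params.build-image)"
      runAfter:
        - tests
        - lint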

Start the pipeline again - this time with an additional parameter (build-image):

tkn pipeline start cd-pipeline \
    -p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
    -p branch="main" \
    -p build-image="image-registry.openshift-image-registry.svc:5000/$SN_ICR_NAMESPACE/accounts:1" \
    -w name=pipeline-workspace,claimName=pipelinerun-pvc \
    -s pipeline \
    --showlog

Everything works:

15 pipeline succeeded task buildah

Now comes the last task: deploy.
There is a task for this in the Tekton Hub: openshift-client.
It does not need to be installed separately as it has already been installed as a ClusterTask.
Command tkn clustertask ls:

16 clustertask openshift-client

The deploy task (with reference to openshift-client) was then integrated into the pipeline and the changes applied with command oc apply -f tekton/pipeline.yaml:

17 implementing pipeline task deploy

17 part 2 implementing pipeline task deploy
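
The pipeline entry for the deploy task might look roughly like this (a sketch; the SCRIPT contents and the image placeholder in the manifest are assumptions):

    - name: deploy
      workspaces:
        - name: manifest-dir
          workspace: pipeline-workspace
      taskRef:
        name: openshift-client
        kind: ClusterTask
      params:
        - name: SCRIPT
          value: |
            # patch the image name into the manifest, then apply it
            sed -i "s|IMAGE_NAME_HERE|$(params.build-image)|g" deploy/deployment.yaml
            oc apply -f deploy/
      runAfter:
        - build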

Start the pipeline again:

tkn pipeline start cd-pipeline \
    -p repo-url="https://github.com/christian-schw/devops-capstone-project.git" \
    -p branch="main" \
    -p build-image="image-registry.openshift-image-registry.svc:5000/$SN_ICR_NAMESPACE/accounts:1" \
    -w name=pipeline-workspace,claimName=pipelinerun-pvc \
    -s pipeline \
    --showlog

The logs:

18 pipeline succeeded task deploy

The user story is complete, Sprint 3 is finished and the Kanban board has been updated:

19 updated kanban board

That completes the project. If you have read this far, thank you very much for your attention! :-)

(back to top)



Getting Started

Development Environment

Important: This project is designed to be executed in the IBM Developer Skills Network Cloud IDE with OpenShift.
Run the following command after cloning the repository (Note: DO NOT run this as a bash script. It sets environment variables and therefore must be sourced):

source bin/setup.sh

This will install Python 3.9, make it the default, modify the bash prompt, create a Python virtual environment and activate it.
After sourcing it, the prompt should look like this:

(venv) theia:project$

(back to top)



Useful Commands

Under normal circumstances you should not have to run these commands.
They are performed automatically at setup but may be useful when things go wrong:

Activating the Python Virtual Environment

Activate the Python 3.9 environment with:

source ~/venv/bin/activate

Installing Python Dependencies

These dependencies are installed as part of the setup process but should you need to install them again, first make sure that the Python 3.9 virtual environment is activated and then use the make install command:

make install

Starting the Postgres Docker Container

This project uses Postgres running in a Docker container.
If for some reason the service is not available you can start it with:

make db

You can use the docker ps command to make sure that postgres is up and running.

(back to top)



Project Folder Layout

The code for the microservice is contained in the service package. All of the tests are in the tests folder.
The code follows the Model-View-Controller pattern, with all of the database code and business logic in the model (models.py) and all of the RESTful routing in the controller (routes.py).

├── service         <- microservice package
│   ├── common/     <- common log and error handlers
│   ├── config.py   <- Flask configuration object
│   ├── models.py   <- code for the persistent model
│   └── routes.py   <- code for the REST API routes
├── setup.cfg       <- tools setup config
└── tests                       <- folder for all of the tests
    ├── factories.py            <- test factories
    ├── test_cli_commands.py    <- CLI tests
    ├── test_models.py          <- model unit tests
    └── test_routes.py          <- route unit tests

(back to top)



Data Model - Account

The Account model contains the following fields:

Name          Type         Optional
------------  -----------  --------
id            Integer      False
name          String(64)   False
email         String(64)   False
address       String(256)  False
phone_number  String(32)   True
date_joined   Date         False
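
Expressed as a Flask-SQLAlchemy model, this table might look roughly like this (a sketch derived from the fields above, not the actual models.py):

    from datetime import date
    from flask_sqlalchemy import SQLAlchemy

    db = SQLAlchemy()

    class Account(db.Model):
        """Represents a customer Account"""
        id = db.Column(db.Integer, primary_key=True)
        name = db.Column(db.String(64), nullable=False)
        email = db.Column(db.String(64), nullable=False)
        address = db.Column(db.String(256), nullable=False)
        phone_number = db.Column(db.String(32), nullable=True)  # optional
        date_joined = db.Column(db.Date, nullable=False, default=date.today)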

(back to top)



Local Kubernetes Development

This repo can also be used for local Kubernetes development.
It is not advised to run these commands in the Cloud IDE environment.
The purpose of these commands is to simulate the Cloud IDE environment locally on your computer.

At a minimum, you will need Docker Desktop installed on your computer.
For the full development environment, you will also need Visual Studio Code with the Remote Containers extension from the Visual Studio Marketplace.
All of these can be installed manually by clicking on the links above, or you can use a package manager like Homebrew on Mac or Chocolatey on Windows.

Please only use these commands for working stand-alone on your own computer with the VSCode Remote Container environment provided.

  1. Bring up a local K3D Kubernetes cluster

    $ make cluster
  2. Install Tekton

    $ make tekton
  3. Install the ClusterTasks that the Cloud IDE has

    $ make clustertasks

You can now perform Tekton development locally, just like in the Cloud IDE lab environment.

(back to top)



Contact

If you have any questions, please feel free to reach out via email: christian-schwanse (at) gmx.net

(back to top)
