AlbertVilaCalvo/FullStack-Node-React-Recipe-REST

Recipe Manager

A Full Stack app built with Node, React, PostgreSQL, REST API, AWS, Kubernetes (EKS) and GitHub Actions

Live site: https://recipemanager.link

Technologies used

Tools

  • Hosted on AWS.
  • Infrastructure as Code with Terraform.
  • CI/CD with GitHub Actions.
  • Local development with Docker Compose.
  • 100% TypeScript, zero JavaScript.

Frontend

  • React single-page app built with Vite.
Backend

  • Node.js server built with Express.js.
  • Database with PostgreSQL.
  • Data validation with zod.
  • Testing
    • Unit tests for functions with Jest.
    • Unit tests for route handlers and middleware with node-mocks-http.
    • Integration tests for routes with supertest.

AWS

  • Frontend deployed to S3 and CloudFront automatically using GitHub Actions.
  • EKS cluster for server deployment.
  • RDS PostgreSQL database.
  • ECR for Docker image storage.
  • Secrets Manager for application secrets (RDS master password, JWT secret).

Kubernetes (EKS)

  • Ingress with AWS Load Balancer Controller.
  • Karpenter for automatic provisioning of nodes based on workload.
  • ExternalDNS for automatic Route53 DNS record management.
  • Managed Node Group that runs CoreDNS, Load Balancer Controller, Karpenter controller and ExternalDNS.
  • Pod Identity.
  • Kustomize for managing Kubernetes manifests.

Features

  • Authentication: register, login, validate email, recover password.
  • Settings: change the user's name, email and password. Delete the user account.
  • Recipe: publish, edit and delete recipes.

Local development

Once running, the server API is available at http://localhost:5000.

To run the app locally, do:

cp .env.local .env
# (Optional) Edit .env to adjust values

# Start all services
docker compose up --build

# (Optional) Seed the database with users and recipes
./scripts/local-development/seed-database.sh

# View service status
docker compose ps

# Stop everything, but keep the database data
docker compose down
# Stop everything and discard the database data
docker compose down --volumes

To run only a single service locally, do:

cd server # Or cd web
npm install
npm run dev

Local database

The local PostgreSQL database is created automatically when you run docker compose up --build. You can interact with it from within the Docker container.

# Connect to database from within the Docker container
docker compose exec db psql -U postgres -d recipemanager

# Backup database
docker compose exec db pg_dump -U postgres recipemanager > backup.sql

# Restore database
docker compose exec -T db psql -U postgres -d recipemanager < backup.sql

The database is exposed on port 5432 of the host machine, so you can also connect to it with a database client running on your machine.

# Connect to database from your host machine
psql -h localhost -p 5432 -U postgres -d recipemanager

You will be prompted for the password, which is defined in your .env file.

Seed the local database

Once the local database container is running, you can automatically fill it with users and recipes using the provided script:

./scripts/local-development/seed-database.sh

This script will:

  1. Create two test users.
  2. Seed the database with sample recipe data.

Alternatively, you can run similar steps manually:

curl http://localhost:5000/api/auth/register -H "Content-Type: application/json" -d '{"name":"Albert", "email":"a@a.com", "password":"123456"}'
curl http://localhost:5000/api/auth/register -H "Content-Type: application/json" -d '{"name":"Blanca", "email":"b@b.com", "password":"123456"}'
docker compose exec -T db psql -U postgres -d recipemanager < server/database-seed.sql

Database

Database schema changes are managed using node-pg-migrate. Migrations run automatically when the server starts. Migrations are stored in server/migrations/ as SQL files.

Creating a new migration

cd server
npm run migrate:create -- my-migration-name

This creates a new SQL file in server/migrations/ with a timestamp prefix. Edit the file to add your schema changes:

-- Up Migration
ALTER TABLE recipe ADD COLUMN description TEXT;

-- Down Migration
ALTER TABLE recipe DROP COLUMN description;

Running migrations manually

Migrations run automatically on server startup, but you can also run them manually:

cd server

# Run the next pending migration
npm run migrate:up

# Rollback the last migration
npm run migrate:down

Email account setup

Sending emails requires creating an account at https://ethereal.email. Click the 'Create Ethereal Account' button and copy the user and password into the EMAIL_USER and EMAIL_PASSWORD environment variables in your .env file.

You can view the emails at https://ethereal.email/messages. A URL to view each sent email is also logged to the server console.

Git pre-commit hook

To check formatting and validate code on every commit, set up the Git pre-commit hook:

cp pre-commit .git/hooks

Note that the checks do not abort the commit (aborting is very annoying); they only report the issues found. It's your responsibility to fix them and amend the commit.

The checks performed are:

  • Prettier for code formatting.
  • TypeScript compiler (tsc) for type checking.
  • ESLint for linting.
  • Terraform fmt and validate.
  • ShellCheck to lint shell scripts.
  • shfmt for shell script formatting.
    • Files are formatted with the options -i 2 -ci -bn, following Google's shell style. Run shfmt -i 2 -ci -bn -w <file> <directory> to format a file and/or directory.
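As a rough sketch, a non-aborting hook of this kind could look like the following. This is not the repository's actual pre-commit file; the tool invocations and paths are illustrative, and each check is skipped when its tool is not installed:

```shell
#!/bin/sh
# Sketch of a non-aborting pre-commit hook: report issues, never block the commit.

run_check() {
  name="$1"; shift
  # Skip the check entirely if the tool is not installed.
  if command -v "$1" >/dev/null 2>&1; then
    if ! "$@" >/dev/null 2>&1; then
      echo "pre-commit: $name found issues - please fix and amend"
    fi
  fi
}

run_check "Prettier"   prettier --check .
run_check "TypeScript" tsc --noEmit
run_check "ESLint"     eslint .
run_check "Terraform"  terraform fmt -check -recursive
run_check "ShellCheck" shellcheck scripts/*.sh
run_check "shfmt"      shfmt -i 2 -ci -bn -d scripts

# Always succeed so the commit goes through (issues are only reported).
true
```

The key design point is that every check funnels through `run_check`, which prints a warning on failure instead of returning a non-zero status, so the hook itself never causes Git to abort the commit.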

Deploy infrastructure with Terraform

Create S3 buckets for Terraform state

Before deploying any infrastructure, you need to create the S3 buckets that Terraform will use to store its state. Use the provided script to do this:

./scripts/bootstrap/create-state-bucket.sh dev  # Or prod

Web (Frontend)

To deploy the React frontend to S3 and CloudFront:

cd terraform/web/environments/dev # Or prod
# Edit terraform.tfvars with your values if needed

# Initialize using the backend.config file generated by scripts/bootstrap/create-state-bucket.sh
terraform init -backend-config="backend.config"

# Create the S3 bucket and CloudFront distribution
terraform apply

Server (API)

1. Create AWS Infrastructure

Create the AWS infrastructure (VPC, EKS, RDS, ECR, etc.):

./scripts/server/create-aws-infrastructure.sh dev  # Or prod

Before running the script, edit the terraform/server/environments/dev/terraform.tfvars (or prod/terraform.tfvars) file to adjust the values as needed.

This script will:

  • Initialize Terraform
  • Create VPC, EKS cluster, RDS database, ECR repository, Pod Identity, ACM certificate for the API endpoint and application secrets (JWT, email credentials)
  • Install Load Balancer Controller, ExternalDNS and Karpenter
  • Create Karpenter NodePool and EC2NodeClass
  • Display next steps

This process takes approximately 20-30 minutes.

2. Build and Push Docker Image

After the AWS infrastructure is created, build and push the Docker image to ECR:

./scripts/server/build-push-image-ecr.sh dev  # Or prod

This script will:

  • Build the Docker image with a timestamp-based tag
  • Log in to ECR and push the image
  • Output the IMAGE_TAG to use for deployment
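As an illustration, the core of such a build-and-push flow might look like this. It is a sketch, not the repository's actual script: the account ID, region, repository name and build context are placeholders.

```shell
# Sketch of a timestamp-tagged build-and-push to ECR.
REGION=us-east-1
ACCOUNT_ID=123456789012 # placeholder AWS account ID
REPO="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/recipemanager-server"

# Timestamp-based tag, e.g. 2026-01-15-12h00m00s
IMAGE_TAG=$(date -u +%Y-%m-%d-%Hh%Mm%Ss)

# Build the server image (assumes the Dockerfile lives in ./server)
docker build -t "$REPO:$IMAGE_TAG" ./server

# Log in to ECR and push the image
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"
docker push "$REPO:$IMAGE_TAG"

# Print the tag to use for the deployment step
echo "IMAGE_TAG=$IMAGE_TAG"
```

Using a timestamp rather than a mutable tag like `latest` means every push is uniquely identifiable and Kubernetes always pulls the exact image you built.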

3. Deploy Server Application to EKS

Deploy the server application to the EKS cluster:

./scripts/server/deploy-server-eks.sh dev <image_tag>  # Use the IMAGE_TAG from build-push-image-ecr.sh output

For example:

./scripts/server/deploy-server-eks.sh dev 2026-01-15-12h00m00s

This script will:

  • Configure kubectl to connect to the EKS cluster
  • Fetch configuration from Terraform outputs, terraform.tfvars file, AWS Secrets Manager, etc.
  • Process Kubernetes manifests using Kustomize and replace placeholders
  • Apply the manifests to deploy the server to EKS and wait for the deployment to complete
  • When the Ingress is created, the Load Balancer Controller provisions an Application Load Balancer and ExternalDNS creates the Route53 A record for the API endpoint pointing to the ALB.

4. Delete AWS Infrastructure

To delete all AWS infrastructure:

./scripts/server/delete-aws-infrastructure.sh dev  # Or prod

This script will:

  • Prompt for confirmation
  • Delete Kubernetes resources
  • Delete AWS infrastructure (VPC, EKS, RDS, ECR, etc.) in the correct order

Warning: This will permanently delete all infrastructure resources, including the ECR images!

Automatic deployment with GitHub Actions

Once the AWS infrastructure is deployed, you can set up automatic deployment of the React web app to S3 and CloudFront using GitHub Actions.

In the GitHub repository, go to Settings → Environments and create an environment named "dev" or "prod". Then open the environment and add the following environment variables (not secrets):

Environment variable                Value
AWS_REGION                          us-east-1
AWS_GITHUB_ACTIONS_OIDC_ROLE_ARN    Output of terraform output oidc_role_arn
WEB_S3_BUCKET                       Output of terraform output website_s3_bucket_name
WEB_CLOUDFRONT_DISTRIBUTION_ID      Output of terraform output website_cloudfront_distribution_id
VITE_API_BASE_URL                   https://api.recipemanager.link/api
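If you prefer the command line over the web UI, recent versions of the GitHub CLI can set environment variables with `gh variable set`. The snippet below only prints the commands to run; the environment name and values mirror the variables listed above, and the `terraform output` commands assume you run them from terraform/web/environments/dev (or prod):

```shell
# Print the `gh variable set` commands for the "dev" environment.
# (Sketch: run the printed commands with an authenticated gh CLI, from a
# directory where the `terraform output` commands resolve.)
ENV=dev
print_cmd() {
  echo "gh variable set $1 --env $ENV --body \"$2\""
}

print_cmd AWS_REGION 'us-east-1'
print_cmd AWS_GITHUB_ACTIONS_OIDC_ROLE_ARN '$(terraform output -raw oidc_role_arn)'
print_cmd WEB_S3_BUCKET '$(terraform output -raw website_s3_bucket_name)'
print_cmd WEB_CLOUDFRONT_DISTRIBUTION_ID '$(terraform output -raw website_cloudfront_distribution_id)'
print_cmd VITE_API_BASE_URL 'https://api.recipemanager.link/api'
```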

Manually deploy the React web app to AWS S3 and CloudFront

This is done automatically using GitHub Actions (see Automatic deployment with GitHub Actions), but you can also do it manually:

cd web
npm run build
aws s3 sync build s3://<s3-bucket-name> --delete
aws cloudfront create-invalidation --distribution-id <distribution-id> --paths '/*'
