A full-stack app built with Node.js, React, PostgreSQL, a REST API, AWS, Kubernetes (EKS), and GitHub Actions
Live site: https://recipemanager.link
- Hosted on AWS.
- Infrastructure as Code with Terraform
- CI/CD with GitHub Actions.
- Local development with Docker Compose.
- 100% TypeScript, zero JavaScript.
- React single-page application.
- State management with Valtio.
- Routing with React Router 6.
- UI design with Chakra UI.
- Node.js server built with Express.js.
- Database with PostgreSQL.
- Data validation with zod.
- Testing:
  - Unit tests for functions with Jest.
  - Unit tests for route handlers and middleware with node-mocks-http.
  - Integration tests for routes with supertest (see the sketch below).
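To give a flavor of the testing setup, here is a minimal sketch of an Express route validated with zod and exercised with supertest. It is illustrative only, not the actual application code: the route path, schema fields, and handler are assumptions.

```typescript
// recipes.test.ts — illustrative sketch; route path and schema are assumptions.
import express from "express";
import request from "supertest";
import { z } from "zod";

// A zod schema guarding the request body, in the spirit of the
// validation used by the real route handlers.
const createRecipeSchema = z.object({
  title: z.string().min(1),
  description: z.string().optional(),
});

// A minimal Express app with one route, standing in for the real server.
const app = express();
app.use(express.json());
app.post("/api/recipes", (req, res) => {
  const result = createRecipeSchema.safeParse(req.body);
  if (!result.success) {
    return res.status(400).json({ errors: result.error.issues });
  }
  return res.status(201).json({ recipe: result.data });
});

// supertest drives real HTTP requests against the app, end to end.
describe("POST /api/recipes", () => {
  it("accepts a valid body", async () => {
    const res = await request(app)
      .post("/api/recipes")
      .send({ title: "Paella" });
    expect(res.status).toBe(201);
  });

  it("rejects an invalid body", async () => {
    const res = await request(app).post("/api/recipes").send({});
    expect(res.status).toBe(400);
  });
});
```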
- Frontend deployed to S3 and CloudFront automatically using GitHub Actions.
- EKS cluster for server deployment.
- RDS PostgreSQL database.
- ECR for Docker image storage.
- Secrets Manager for application secrets (RDS master password, JWT secret).
- Ingress with AWS Load Balancer Controller.
- Karpenter for automatic provisioning of nodes based on workload.
- ExternalDNS for automatic Route53 DNS record management.
- Managed Node Group that runs CoreDNS, Load Balancer Controller, Karpenter controller and ExternalDNS.
- Pod Identity.
- Kustomize for managing Kubernetes manifests.
- Authentication: register, login, validate email, recover password.
- Settings: change the user's name, email and password. Delete the user account.
- Recipe: publish, edit and delete recipes.
When running locally, the application is available at:
- Web (React): http://localhost:3000
- Server (API): http://localhost:5000
- Database: localhost:5432
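Once the stack is up (see below), you can smoke-test the API from a small TypeScript script. The register endpoint and payload match the manual seeding commands shown later; the script itself is an illustrative sketch, not part of the repository.

```typescript
// smoke-test.ts — illustrative sketch; run with ts-node or tsx while the
// stack is up. Uses the global fetch available in Node 18+.
const API_BASE_URL = "http://localhost:5000/api";

async function register(name: string, email: string, password: string) {
  const res = await fetch(`${API_BASE_URL}/auth/register`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, email, password }),
  });
  if (!res.ok) {
    throw new Error(`Register failed: ${res.status} ${await res.text()}`);
  }
  return res.json();
}

register("Albert", "a@a.com", "123456")
  .then((body) => console.log("Registered:", body))
  .catch((err) => console.error(err));
```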
To run the app locally:

```bash
cp .env.local .env

# (Optional) Edit .env to adjust values

# Start all services
docker compose up --build

# (Optional) Seed the database with users and recipes
./scripts/local-development/seed-database.sh

# View service status
docker compose ps

# Stop everything, but keep the database data
docker compose down

# Stop everything and discard the database data
docker compose down --volumes
```

To run only a single service locally:

```bash
cd server # Or cd web
npm install
npm run dev
```

The local PostgreSQL database is created automatically when you run `docker compose up --build`. You can interact with it from within the Docker container:
```bash
# Connect to the database from within the Docker container
docker compose exec db psql -U postgres -d recipemanager

# Back up the database
docker compose exec db pg_dump -U postgres recipemanager > backup.sql

# Restore the database
docker compose exec -T db psql -U postgres -d recipemanager < backup.sql
```

The database port is exposed on localhost:5432, so you can also connect with a client from your host machine:

```bash
# Connect to the database from your host machine
psql -h localhost -p 5432 -U postgres -d recipemanager
```

You will be prompted for the password, which is defined in your .env file.
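You can also connect from TypeScript with node-postgres. The sketch below mirrors the connection settings above; the POSTGRES_PASSWORD variable name and the query are assumptions (the `recipe` table appears in the migration example further down).

```typescript
// db-check.ts — illustrative sketch using node-postgres (npm install pg).
import { Pool } from "pg";

const pool = new Pool({
  host: "localhost",
  port: 5432,
  user: "postgres",
  database: "recipemanager",
  password: process.env.POSTGRES_PASSWORD, // assumed variable name from .env
});

async function main() {
  // `recipe` is the table name used in the migration example below.
  const { rows } = await pool.query("SELECT COUNT(*) AS recipes FROM recipe");
  console.log(rows[0]);
  await pool.end();
}

main().catch(console.error);
```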
Once the local database container is running, you can automatically fill it with users and recipes using the provided script:

```bash
./scripts/local-development/seed-database.sh
```

This script will:

- Create two test users.
- Seed the database with sample recipe data.

Alternatively, you can run similar steps manually:

```bash
curl http://localhost:5000/api/auth/register -H "Content-Type: application/json" -d '{"name":"Albert", "email":"a@a.com", "password":"123456"}'
curl http://localhost:5000/api/auth/register -H "Content-Type: application/json" -d '{"name":"Blanca", "email":"b@b.com", "password":"123456"}'
docker compose exec -T db psql -U postgres -d recipemanager < server/database-seed.sql
```
Database schema changes are managed using node-pg-migrate. Migrations are stored in server/migrations/ as SQL files and run automatically when the server starts.
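For reference, invoking node-pg-migrate programmatically at startup looks roughly like the sketch below. This is not the actual server code; the DATABASE_URL variable name is an assumption.

```typescript
// migrate-on-startup.ts — sketch of running pending migrations with
// node-pg-migrate's programmatic API before the server starts listening.
import runner from "node-pg-migrate";

export async function runMigrations() {
  await runner({
    databaseUrl: process.env.DATABASE_URL!, // assumed variable name
    dir: "migrations",             // where the SQL migration files live
    direction: "up",               // apply pending migrations
    migrationsTable: "pgmigrations",
  });
}
```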
To create a new migration:

```bash
cd server
npm run migrate:create -- my-migration-name
```

This creates a new SQL file in server/migrations/ with a timestamp prefix. Edit the file to add your schema changes:

```sql
-- Up Migration
ALTER TABLE recipe ADD COLUMN description TEXT;

-- Down Migration
ALTER TABLE recipe DROP COLUMN description;
```

Migrations run automatically on server startup, but you can also run them manually:

```bash
cd server

# Run the next pending migration
npm run migrate:up

# Roll back the last migration
npm run migrate:down
```

Sending emails requires an account at https://ethereal.email. Click the 'Create Ethereal Account' button and copy the user and password into the EMAIL_USER and EMAIL_PASSWORD variables in your .env file. You can view the sent emails at https://ethereal.email/messages; a URL to view each email is also logged to the server console.
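Ethereal behaves like any SMTP provider, so a minimal nodemailer sketch using those credentials looks like this (illustrative only, not the application's actual mail code; the sender and recipient addresses are made up):

```typescript
// send-test-email.ts — illustrative nodemailer sketch for Ethereal.
import nodemailer from "nodemailer";

const transporter = nodemailer.createTransport({
  host: "smtp.ethereal.email",
  port: 587,
  auth: {
    user: process.env.EMAIL_USER,     // from your .env file
    pass: process.env.EMAIL_PASSWORD, // from your .env file
  },
});

async function main() {
  const info = await transporter.sendMail({
    from: '"Recipe Manager" <no-reply@recipemanager.link>', // made-up sender
    to: "a@a.com",
    subject: "Test email",
    text: "Hello from Ethereal",
  });
  // Ethereal does not deliver mail; this URL previews the message.
  console.log("Preview URL:", nodemailer.getTestMessageUrl(info));
}

main().catch(console.error);
```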
To check formatting and validate code on every commit, set up the Git pre-commit hook:

```bash
cp pre-commit .git/hooks
```

Note that the checks do not abort the commit (aborting is very annoying); they only report any issues found. It's your responsibility to fix them and amend the commit.
The checks performed are:
- Prettier for code formatting.
- TypeScript compiler (tsc) for type checking.
- ESLint for linting.
- Terraform fmt and validate.
- ShellCheck to lint shell scripts.
- shfmt for shell script formatting. Files are formatted with the options `-i 2 -ci -bn`, following Google's shell style. Run `shfmt -i 2 -ci -bn -w <file> <directory>` to format a file and/or directory.
Before deploying any infrastructure, you need to create the S3 buckets that Terraform will use to store its state. Use the provided script:

```bash
./scripts/bootstrap/create-state-bucket.sh dev # Or prod
```

To deploy the React frontend to S3 and CloudFront:

```bash
cd terraform/web/environments/dev # Or prod

# Edit terraform.tfvars with your values if needed

# Initialize using the backend.config file generated by scripts/bootstrap/create-state-bucket.sh
terraform init -backend-config="backend.config"
```

Create the AWS infrastructure (VPC, EKS, RDS, ECR, etc.):

```bash
./scripts/server/create-aws-infrastructure.sh dev # Or prod
```

Edit terraform/server/environments/dev/terraform.tfvars (or prod/terraform.tfvars) to adjust values before running the script.
This script will:
- Initialize Terraform
- Create VPC, EKS cluster, RDS database, ECR repository, Pod Identity, ACM certificate for the API endpoint and application secrets (JWT, email credentials)
- Install Load Balancer Controller, ExternalDNS and Karpenter
- Create Karpenter NodePool and EC2NodeClass
- Display next steps
This process takes approximately 20-30 minutes.
After the AWS infrastructure is created, build and push the Docker image to ECR:

```bash
./scripts/server/build-push-image-ecr.sh dev # Or prod
```

This script will:
- Build the Docker image with a timestamp-based tag
- Log in to ECR and push the image
- Output the IMAGE_TAG to use for deployment
Deploy the server application to the EKS cluster:

```bash
./scripts/server/deploy-server-eks.sh dev <image_tag> # Use the IMAGE_TAG from build-push-image-ecr.sh output
```

For example:

```bash
./scripts/server/deploy-server-eks.sh dev 2026-01-15-12h00m00s
```

This script will:
- Configure kubectl to connect to the EKS cluster
- Fetch configuration from Terraform outputs, terraform.tfvars file, AWS Secrets Manager, etc.
- Process Kubernetes manifests using Kustomize and replace placeholders
- Apply the manifests to deploy the server to EKS and wait for the deployment to complete
- When the Ingress is created, the Load Balancer Controller provisions an Application Load Balancer and ExternalDNS creates the Route53 A record for the API endpoint pointing to the ALB.
To delete all AWS infrastructure:

```bash
./scripts/server/delete-aws-infrastructure.sh dev # Or prod
```

This script will:
- Prompt for confirmation
- Delete Kubernetes resources
- Delete AWS infrastructure (VPC, EKS, RDS, ECR, etc.) in the correct order
Warning: This will permanently delete all infrastructure resources, including the ECR images!
Once the AWS infrastructure is deployed, you can set up automatic deployment of the React web app to S3 and CloudFront using GitHub Actions.
In the GitHub repository, go to Settings → Environments and create an environment named "dev" or "prod". Then open the environment and add the following environment variables (not secrets):
| Environment variable | Value |
|---|---|
| AWS_REGION | us-east-1 |
| AWS_GITHUB_ACTIONS_OIDC_ROLE_ARN | Output of `terraform output oidc_role_arn` |
| WEB_S3_BUCKET | Output of `terraform output website_s3_bucket_name` |
| WEB_CLOUDFRONT_DISTRIBUTION_ID | Output of `terraform output website_cloudfront_distribution_id` |
| VITE_API_BASE_URL | https://api.recipemanager.link/api |
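For context, VITE_API_BASE_URL is baked into the bundle at build time: Vite exposes variables prefixed with VITE_ to client code through import.meta.env. A minimal illustrative sketch (the file path and endpoint are assumptions):

```typescript
// web/src/api/client.ts — illustrative; shows how the build-time variable
// reaches the client bundle via Vite's import.meta.env.
const API_BASE_URL = import.meta.env.VITE_API_BASE_URL;

export async function getRecipes() {
  const res = await fetch(`${API_BASE_URL}/recipes`); // endpoint path is an assumption
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```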
Deploying the web app is done automatically using GitHub Actions (see Automatic deployment with GitHub Actions), but you can also do it manually:
```bash
cd web
npm run build
aws s3 sync build s3://<s3-bucket-name> --delete
aws cloudfront create-invalidation --distribution-id <distribution-id> --paths '/*'
```