Easy Deploy is a PaaS (Platform as a Service) that simplifies application deployment on AWS, designed for developers who want to focus on building great applications without worrying about DevOps or cloud infrastructure. Our platform handles all the complex AWS setup, containerization, and deployment processes automatically.
Key benefits:
- Zero DevOps Knowledge Required: Deploy your applications without understanding AWS, containers, or infrastructure
- Automated Infrastructure: We handle all AWS resource provisioning and management
- Secure by Default: Built-in security best practices for your applications
- Scalable Architecture: Applications automatically scale based on demand
- Cost-Effective: Pay only for the resources your application actually uses
Easy Deploy uses two distinct CI/CD pipelines:
- Platform CI/CD Pipeline (GitHub Actions):
  - Manages the Easy Deploy platform itself
  - Builds and deploys platform updates
  - Handles infrastructure changes
  - Ensures platform reliability and security
- Application CI/CD Pipeline (AWS CodeBuild):
  - Manages your application deployments
  - Builds your application container
  - Pushes to Amazon ECR
  - Deploys to ECS
  - Handles application updates
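For example, an application pipeline run can also be started manually with the AWS CLI (the project name here is an assumption, not necessarily the platform's actual name):

```bash
aws codebuild start-build --project-name easy-deploy-app-build
```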
When you deploy an application through Easy Deploy, the following process occurs:
- AWS User Setup:
  - System checks if you have an AWS user account
  - If not, creates a new IAM user with necessary permissions
  - Sets up secure access credentials
  - Configures user-specific AWS resources
- Repository Setup:
  - Clones your GitHub repository
  - Stores it in a secure EFS (Elastic File System) location
  - Copies pre-configured pipeline templates
  - Sets up build and deployment configurations
- Application Deployment:
  - CodeBuild accesses your code from EFS
  - Builds your application container
  - Pushes the image to ECR
  - Deploys the container to ECS
  - Configures load balancing and auto-scaling
  - Stores build artifacts in S3
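To make the first step concrete, the user-provisioning portion corresponds roughly to the following AWS CLI calls (the user name and policy ARN are illustrative, not the platform's actual identifiers):

```bash
# Create a dedicated IAM user for deployments (name is hypothetical)
aws iam create-user --user-name easy-deploy-user

# Attach a policy granting the permissions the pipeline needs (ARN is hypothetical)
aws iam attach-user-policy --user-name easy-deploy-user \
  --policy-arn arn:aws:iam::123456789012:policy/easy-deploy-permissions

# Generate access credentials for the new user
aws iam create-access-key --user-name easy-deploy-user
```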
The system consists of several key components:
- ECS Cluster: A scalable container orchestration service
- ECR Repository: For storing Docker images
- CodeBuild Pipeline: For continuous integration and deployment
- VPC with Public/Private Subnets: For secure networking
- Application Load Balancer: For routing traffic to containers
- Auto Scaling Group: For managing EC2 instances
Prerequisites:
- AWS Account with appropriate permissions
- AWS CLI configured with credentials
- Terraform installed (version >= 1.0.0)
- Docker installed
- Python 3.8+ (for local development)
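You can quickly verify these prerequisites from a shell:

```bash
aws sts get-caller-identity   # confirms the AWS CLI has working credentials
terraform version             # should report >= 1.0.0
docker --version
python3 --version             # should report 3.8 or newer
```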
Project structure:

```
Backend/
├── app/                   # Application source code
│   └── Pipelines/         # CI/CD and infrastructure code
│       ├── Common/        # Shared infrastructure components
│       │   └── Terraform/ # Terraform configurations
│       └── Backend/       # Backend-specific configurations
├── terraform/             # Root Terraform configurations
├── images/                # Architecture and documentation images
└── env/                   # Environment configurations
```
The ECS cluster is configured with the following key features:
- Capacity Providers: Uses a mix of Fargate and EC2 launch types
  - Fargate for serverless container management
  - EC2 for cost-optimized workloads
- Task Definitions:
  - Defines container specifications
  - CPU and memory requirements
  - Port mappings
  - Environment variables
  - Volume mounts
- Services:
  - Maintains desired count of tasks
  - Handles task placement and scheduling
  - Integrates with Application Load Balancer
  - Supports rolling deployments
- Container Instances:
  - EC2 instances registered to the cluster
  - Managed by Auto Scaling Group
  - Runs ECS container agent
  - Reports resource availability
- Monitoring & Logging:
  - CloudWatch integration for metrics
  - Container Insights enabled
  - Centralized logging
  - Health checks and alerts
The cluster uses placement strategies to optimize container distribution across availability zones while maintaining high availability. Task networking is handled through awsvpc mode, giving each task its own ENI and security group.
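As a quick sanity check after provisioning, the cluster's settings (including Container Insights) and attached capacity providers can be inspected with the AWS CLI (the cluster name is a placeholder):

```bash
aws ecs describe-clusters \
  --clusters easy-deploy-cluster \
  --include SETTINGS STATISTICS
```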
In short, the ECS cluster manages container deployments, supports auto-scaling, and integrates with the surrounding AWS services.
EFS (Elastic File System) acts as the source of truth for repository data:
- Provides persistent storage across all containers and EC2 instances
- Mounted at `/mnt/repos` on EC2 instances and containers
- Automatically scales storage capacity up and down
- Supports concurrent access from multiple availability zones
- Uses NFSv4 protocol for mounting
- Configured with the following:
  - Mount target in each AZ's private subnet
  - Security group allowing NFS traffic (port 2049)
  - IAM roles and policies for ECS tasks
  - Lifecycle policies for backup and retention
Mount Process:
- EFS mount helper installed on EC2 instances
- Mount target created in each AZ's subnet
- EC2 instances mount EFS to `/mnt/repos` at boot time
- Containers mount EFS via task definition volume configuration
- All repository data stored under `/mnt/repos/<owner>/<repo-name>`
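For reference, the mount sequence on an EC2 instance using the EFS mount helper typically looks like this (the file system ID is a placeholder):

```bash
# Install the EFS mount helper (Amazon Linux)
sudo yum install -y amazon-efs-utils

# Create the mount point and mount the file system with TLS encryption in transit
sudo mkdir -p /mnt/repos
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/repos
```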
Benefits:
- Shared storage across all deployment components
- Persistent data survives container restarts
- Automatic backups and high availability
- Elastic scaling without disruption
Networking components:
- VPC with public and private subnets
  - Public subnets for internet-facing resources
  - Private subnets for internal resources
  - Custom route tables for each subnet type
  - Internet Gateway for public internet access
  - CIDR block allocation for IP addressing
- Security groups for traffic control
  - Inbound/outbound rules based on ports and protocols
  - Service-specific security groups
  - Default deny-all with explicit allows
  - Stateful packet filtering
- Load balancer in each public subnet
  - Application Load Balancer (ALB) for HTTP/HTTPS traffic
  - Health checks and target group routing
  - SSL/TLS termination
  - Cross-zone load balancing enabled
- Network ACLs for additional security
  - Subnet-level traffic control
  - Stateless packet filtering
  - Ordered rule evaluation
- VPC Endpoints for AWS services
  - Interface endpoints for ECR, CloudWatch
  - Gateway endpoints for S3, DynamoDB
  - Reduced data transfer costs
NAT Gateway:
- Enables outbound internet access for resources in private subnets
- Deployed in public subnets with Elastic IPs
- Routes traffic from private subnets through the public subnets
- Provides security by blocking inbound connections
- Automatically scales based on traffic volume
- Highly available across availability zones
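Concretely, the private subnets' route tables send internet-bound traffic to the NAT gateway; expressed as an AWS CLI call (both IDs are placeholders):

```bash
aws ec2 create-route \
  --route-table-id rtb-0abc1234def567890 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0abc1234def567890
```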

EC2 Instance Connect Endpoint:
- Secure way to connect to EC2 instances without public IPs
- Eliminates the need for bastion hosts
- Uses AWS IAM for authentication and authorization
- Supports SSH and RDP connections
- Traffic stays within the VPC network
- Provides audit logs of all connection attempts
- Regional service with automatic scaling
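Once the endpoint exists, an SSH session to a private instance can be opened directly from AWS CLI v2 (the instance ID is a placeholder):

```bash
aws ec2-instance-connect ssh \
  --instance-id i-0123456789abcdef0 \
  --connection-type eice
```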

To deploy the infrastructure:

- Navigate to the Terraform directory:

```bash
cd Backend/terraform
```

- Initialize Terraform. First run the `setup_backend.sh` script, which:
  - Creates an S3 bucket for storing Terraform state
  - Creates a DynamoDB table for state locking
  - Configures `backend.tf` with the created resources
  - Ensures proper state management across team members

  Then run:

```bash
terraform init
```
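The repository's actual script may differ; here is a minimal sketch of what such a backend bootstrap typically does, assuming illustrative bucket/table names and the us-east-1 region:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of setup_backend.sh -- all names are illustrative.
set -euo pipefail

BUCKET="easy-deploy-tfstate"   # assumed state bucket name
TABLE="easy-deploy-tf-locks"   # assumed lock table name
REGION="us-east-1"             # assumed region

# S3 bucket for Terraform state, versioned so state history is recoverable
aws s3api create-bucket --bucket "$BUCKET" --region "$REGION"
aws s3api put-bucket-versioning --bucket "$BUCKET" \
  --versioning-configuration Status=Enabled

# DynamoDB table Terraform uses for state locking
aws dynamodb create-table --table-name "$TABLE" \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```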
- Apply the Terraform configuration to create all infrastructure resources:
  - Creates VPC and networking components
  - Sets up the ECS cluster and services
  - Configures the load balancer and target groups
  - Creates the ECR repository
  - Sets up the CodeBuild project and IAM roles
  - Establishes security groups and access controls
  - Provisions the NAT Gateway and VPC endpoints
  - Creates the EC2 Instance Connect Endpoint

```bash
terraform apply --auto-approve --lock=false
```

  The `--auto-approve` flag skips the interactive approval step. The `--lock=false` flag disables state locking so concurrent Terraform operations don't block each other, at the cost of protection against conflicting state writes.
- Build and push your Docker image:

```bash
docker build -t your-app:latest .
docker tag your-app:latest $ECR_REPO_URL:latest
docker push $ECR_REPO_URL:latest
```
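Pushing requires Docker to be authenticated against your ECR registry first; the standard login command looks like this (account ID and region are placeholders):

```bash
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS \
    --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
```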
- The CodeBuild pipeline will then automatically execute the following steps:
- Access the source code from the EFS mount point where repositories are stored
- Install dependencies and run tests based on buildspec.yml configuration
- Build the application using the specified entry point and port
- Create an optimized Docker image with the application
- Tag and push the image to Amazon ECR with a unique deployment tag
- Update the ECS task definition with the new image
- Deploy the updated container to ECS and configure load balancer routing
- Monitor deployment health and roll back if needed
- Log all build and deployment steps to CloudWatch
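Steps 6-8 map roughly onto the following AWS CLI calls (cluster and service names are illustrative; the real pipeline also registers a new task definition revision for the freshly pushed image):

```bash
# Roll the service onto the latest task definition revision
aws ecs update-service \
  --cluster easy-deploy-cluster \
  --service your-app-service \
  --force-new-deployment

# Block until the rolling deployment has stabilized
aws ecs wait services-stable \
  --cluster easy-deploy-cluster \
  --services your-app-service
```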
The system uses AWS CodeBuild for continuous integration and deployment:
- Build Phase:
  - Accesses source code from EFS
  - Builds the Docker image
  - Pushes it to ECR
- Deploy Phase:
  - Registers an updated ECS task definition with the new image
  - Updates the ECS service to roll out the new task definition
  - Deploys the new container
  - Updates the load balancer
Security features:
- IAM roles and policies for least-privilege access
- Security groups for network isolation
- ECR image scanning
- VPC with public/private subnet separation
Monitoring:
- CloudWatch integration for logs and metrics
- Auto-scaling based on CPU/memory usage
- Health checks for container instances
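For day-to-day monitoring, container logs can be followed live with AWS CLI v2 (the log group name is an assumption):

```bash
aws logs tail /ecs/your-app --follow
```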
Common issues:
- Container Health Checks Failing:
  - Check application logs in CloudWatch
  - Verify security group configurations
  - Ensure correct port mappings
- Deployment Failures:
  - Check CodeBuild logs
  - Verify ECR repository access
  - Ensure sufficient IAM permissions
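Two CLI checks that often narrow these down quickly (cluster, service, and project names are placeholders):

```bash
# Recent service events usually explain failed health checks or placement errors
aws ecs describe-services \
  --cluster easy-deploy-cluster \
  --services your-app-service \
  --query 'services[0].events[:5]'

# List recent builds for the project, then inspect the failing one
aws codebuild list-builds-for-project --project-name easy-deploy-app-build
```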
You can connect to running containers using AWS Systems Manager Session Manager:
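A sketch of such a session using ECS Exec, which tunnels through Session Manager (cluster, task ID, and container name are placeholders, and the service must have been created or updated with ECS Exec enabled):

```bash
aws ecs execute-command \
  --cluster easy-deploy-cluster \
  --task 0123456789abcdef0 \
  --container your-app \
  --interactive \
  --command "/bin/sh"
```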
To contribute:
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.