# Mackuper

Docker-based backup solution for AWS S3 - simple, reliable, and resource-efficient backup management with a clean web interface.
More screenshots: Jobs | History | Settings
## Table of Contents

- [Features](#features)
- [Quick Start](#quick-start)
- [Prerequisites](#prerequisites)
- [Configuration](#configuration)
- [Usage](#usage)
- [Architecture](#architecture)
- [Web Interface](#web-interface)
- [Logging](#logging)
- [Troubleshooting](#troubleshooting)
- [Security](#security)
- [Docker Deployment](#docker-deployment)
- [Development](#development)
- [FAQ](#faq)
- [License](#license)
- [Support](#support)
- [Credits](#credits)
## Features

- 🔐 Secure - Encrypted AWS credentials, password-protected access, CSRF protection
- 📦 Flexible Backup Sources - Local directories and remote SSH/SFTP servers
- ⚙️ Multiple Compression Formats - ZIP, TAR.GZ, TAR.BZ2, TAR.XZ, or none
- ☁️ AWS S3 Storage - Automatic uploads with multipart support for large files
- 📅 Scheduled Backups - Cron-based scheduling with APScheduler
- 🔄 Retention Policies - Automatic cleanup of old backups
- 📊 Web Dashboard - Clean, intuitive interface for managing backups
- 🐳 Docker-Ready - Single container deployment with docker-compose
- 📝 Comprehensive Logging - Detailed logs for troubleshooting and audit
## Quick Start

### Docker Hub

Run Mackuper directly from Docker Hub:

```bash
docker run -d \
  --name mackuper \
  -p 5000:5000 \
  -v mackuper-data:/data \
  --restart unless-stopped \
  lirem/mackuper:latest
```

Access the web interface:

1. Open http://localhost:5000 in your browser
2. Complete the setup wizard (create admin account, configure AWS S3)
3. Start creating backup jobs!

Note: This creates a Docker volume named `mackuper-data` for persistent storage.
### Docker Compose

1. Clone the repository:

   ```bash
   git clone https://github.com/lirem/mackuper.git
   cd mackuper
   ```

2. Create environment file (optional):

   ```bash
   cp .env.example .env
   # Edit .env with your preferred settings
   ```

3. Start the application:

   ```bash
   docker-compose up -d
   ```

4. Access the web interface: Open http://localhost:5000 in your browser

5. Complete the setup wizard:
   - Create admin account
   - Configure AWS S3 credentials
   - Test connection

That's it! You're ready to create backup jobs!
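If you run the image straight from Docker Hub but prefer Compose to manage the container, a minimal `docker-compose.yml` along these lines should be equivalent to the `docker run` command above (a sketch for reference; the file shipped in the repository is authoritative):

```yaml
# Hypothetical minimal docker-compose.yml mirroring the docker run command above.
services:
  mackuper:
    image: lirem/mackuper:latest
    container_name: mackuper
    ports:
      - "5000:5000"
    volumes:
      - mackuper-data:/data
    restart: unless-stopped

volumes:
  mackuper-data:
```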
### Manual Installation

1. Install Python 3.11+:

   ```bash
   python --version  # Should be 3.11 or higher
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run the development server:

   ```bash
   python run.py
   ```

4. Access the application: Open http://localhost:5000 in your browser
## Prerequisites

- Docker and Docker Compose (for Docker deployment)
- OR Python 3.11+ (for manual installation)
- AWS S3 Bucket with appropriate permissions
- AWS Access Credentials (Access Key ID and Secret Access Key)
Your AWS IAM user/role needs the following S3 permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::your-backup-bucket-name"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::your-backup-bucket-name/*"
    }
  ]
}
```

Important Notes:

- Replace `your-backup-bucket-name` with your actual bucket name
- `s3:AbortMultipartUpload` and `s3:ListMultipartUploadParts` are required for large file uploads (>100MB)
- `s3:GetBucketLocation` is needed for S3 connection testing
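If you manage IAM from the command line, a policy like the one above can be attached with the AWS CLI. The policy file name and IAM user name below are placeholders:

```bash
# Attach the JSON policy above (saved as mackuper-s3-policy.json)
# as an inline policy on an example IAM user.
aws iam put-user-policy \
  --user-name mackuper-backup \
  --policy-name MackuperS3Access \
  --policy-document file://mackuper-s3-policy.json
```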
## Configuration

Create a `.env` file based on `.env.example`:
```bash
# Flask Configuration
FLASK_ENV=production                         # Environment: production or development
SECRET_KEY=your-secret-key-here-change-this  # Change this! Generate with: python -c "import secrets; print(secrets.token_hex(32))"

# Database (paths inside Docker container)
DATABASE_URL=sqlite:////data/mackuper.db     # Database file location inside container

# Directories (paths inside Docker container)
TEMP_DIR=/data/temp                          # Temporary files during backup creation
LOCAL_BACKUP_DIR=/data/local_backups         # Optional local backup storage

# Server
PORT=5000                                    # Port inside container (mapped to host port in docker-compose.yml)

# HTTPS (optional)
HTTPS_ENABLED=false                          # Enable if running behind HTTPS reverse proxy
```

AWS credentials are configured through the web interface during setup:

1. Access the setup wizard at http://localhost:5000/setup
2. Enter your AWS credentials
3. Select your bucket and region
4. Test the connection

Credentials are encrypted and stored securely in the database.
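As an optional sanity check, you can verify the same credentials with the AWS CLI before entering them in the wizard (keys, bucket name, and region below are placeholders):

```bash
# List the bucket with the credentials Mackuper will use;
# a permissions or region problem shows up here immediately.
AWS_ACCESS_KEY_ID=AKIA... \
AWS_SECRET_ACCESS_KEY=... \
aws s3 ls s3://your-backup-bucket-name --region us-east-1
```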
## Usage

### Creating a Backup Job

1. Navigate to the Jobs tab in the web interface
2. Click "Create New Job"
3. Configure the backup:
   - Job Name: Descriptive name for your backup
   - Source Type: Local or SSH
   - Source Path: Directory or file to back up
   - Compression: Choose format (ZIP, TAR.GZ, etc.)
   - Schedule: Cron expression (e.g., `0 2 * * *` for daily at 2 AM)
   - Retention: How many days to keep backups
4. Save and enable the job
### Local Sources

Back up files from the server running Mackuper:

- Source Path: `/path/to/directory` or `/path/to/file`
- Example: `/home/user/documents`
Important: By default, the Docker container can only access files inside the container. To back up files from your host machine, you need to mount them into the container.
Mounting Host Directories:

1. Using docker-compose.yml (Recommended):

   Edit `docker-compose.yml` to add volume mounts:

   ```yaml
   volumes:
     - mackuper-data:/data
     - /path/on/host:/backup/documents:ro  # Mount host directory (read-only)
     - /var/www/html:/backup/websites:ro   # Another example
   ```

   Then restart:

   ```bash
   docker-compose down && docker-compose up -d
   ```

2. Using the docker run command:

   Add `-v` flags when running the container:

   ```bash
   docker run -d \
     --name mackuper \
     -p 5000:5000 \
     -v mackuper-data:/data \
     -v /home/user/documents:/backup/documents:ro \
     -v /var/www/html:/backup/websites:ro \
     --restart unless-stopped \
     lirem/mackuper:latest
   ```
Creating Backup Jobs for Mounted Directories:

- Source Type: Select "Local"
- Source Path: Use the container path (e.g., `/backup/documents`)
- The `:ro` flag mounts directories as read-only for safety

Example:

- Host path: `/home/user/documents`
- Mount in docker-compose: `/home/user/documents:/backup/docs:ro`
- Backup job source path: `/backup/docs`
### SSH/SFTP Sources

Back up files from a remote server via SSH/SFTP:

- Hostname: Remote server address
- Port: SSH port (default: 22)
- Username: SSH username
- Authentication: Password or private key
- Source Path: Remote path to back up
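Mackuper performs the transfer itself, but to see what the SFTP fetch step amounts to, here is a minimal sketch using paramiko (an illustrative assumption, not Mackuper's actual code; host, credentials, and paths are placeholders):

```python
# Minimal SFTP download sketch (pip install paramiko).
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a demo; verify host keys in production
client.connect("backup.example.com", port=22, username="backupuser", password="secret")

sftp = client.open_sftp()
sftp.get("/var/www/html/index.html", "/tmp/index.html")  # remote path -> local path
sftp.close()
client.close()
```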
### Schedule Examples

```
0 2 * * *      # Daily at 2:00 AM
0 */6 * * *    # Every 6 hours
0 0 * * 0      # Weekly on Sunday at midnight
0 3 1 * *      # Monthly on the 1st at 3:00 AM
*/30 * * * *   # Every 30 minutes
0 0,12 * * *   # Twice daily (midnight and noon)
```

## Architecture

- Backend: Flask (Python 3.11+), SQLAlchemy, APScheduler
- Frontend: Alpine.js, Tailwind CSS
- Storage: SQLite, AWS S3
- Deployment: Docker, Gunicorn
- Security: Fernet encryption, Flask-Login, CSRF protection
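Scheduling is built on APScheduler (listed in the stack above). As a standalone illustration, here is how a cron expression like those in the schedule examples maps to a job; the backup function is a placeholder, not Mackuper's internals:

```python
# Cron-style scheduling with APScheduler.
import time

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger

def run_backup():
    print("backup started")  # placeholder for a real backup job

scheduler = BackgroundScheduler()
# "0 2 * * *" = daily at 2:00 AM, same syntax as Mackuper's schedule field
scheduler.add_job(run_backup, CronTrigger.from_crontab("0 2 * * *"))
scheduler.start()

try:
    while True:
        time.sleep(60)  # keep the process alive so the scheduler can fire
except KeyboardInterrupt:
    scheduler.shutdown()
```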
### Data Directory

```
./data/                  # Persistent volume
├── mackuper.db          # SQLite database
├── logs/                # Application logs
│   └── mackuper.log     # Main log file (10MB max, 10 backups)
├── temp/                # Temporary backup files (auto-cleanup)
└── local_backups/       # Optional local backup storage
    └── {job_name}/
        └── {YYYY}/{MM}/
```
### Backup Process

1. Acquire Source - Download files from local or SSH source
2. Compress - Create archive with selected compression format
3. Upload to S3 - Upload with a structured key: `{job}/{YYYY}/{MM}/{filename}`
4. Local Storage - Optionally store locally
5. Cleanup - Remove temporary files
6. Log Results - Record in backup history
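The upload step corresponds roughly to a boto3 `upload_file` call with a multipart threshold; a sketch with placeholder bucket and file names (not Mackuper's actual code):

```python
# S3 upload with the structured key layout and multipart support.
from datetime import datetime

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")  # credentials resolved from the environment in this sketch
now = datetime.now()
key = f"my-job/{now:%Y}/{now:%m}/backup_{now:%Y%m%d_%H%M%S}.tar.gz"

# Files above 100 MB are split into multipart uploads automatically.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024)
s3.upload_file("/data/temp/backup.tar.gz", "your-backup-bucket-name", key, Config=config)
```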
### S3 Bucket Structure

```
your-bucket/
├── job-name/
│   ├── 2025/
│   │   ├── 01/
│   │   │   ├── backup_20250101_020000.tar.gz
│   │   │   └── backup_20250102_020000.tar.gz
│   │   └── 02/
│   │       └── backup_20250201_020000.tar.gz
```
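Retention cleanup amounts to listing a job's prefix in this layout and deleting objects older than the retention window; a simplified boto3 sketch with placeholder names (Mackuper's own logic may differ):

```python
# Delete backups older than a retention window under a job's S3 prefix.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
cutoff = datetime.now(timezone.utc) - timedelta(days=30)  # example: 30-day retention

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="your-backup-bucket-name", Prefix="job-name/"):
    for obj in page.get("Contents", []):
        if obj["LastModified"] < cutoff:
            s3.delete_object(Bucket="your-backup-bucket-name", Key=obj["Key"])
```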
## Web Interface

### Dashboard

- Overview Cards: Total jobs, active jobs, last backup, scheduler status
- Statistics: Success rate, total backups, total size
- Recent Activity: Last 10 backup executions with status
### Jobs

- Job Management: Create, edit, delete, enable/disable
- Manual Execution: Run any job immediately
- Job Configuration: Source, schedule, compression, retention
### History

- Backup History: View all past backup executions
- Filtering: By status, job, time range
- Log Viewer: View detailed execution logs
- Cleanup: Delete old history records (minimum 30 days)
### Settings

- AWS Configuration: Update S3 credentials and bucket
- Connection Test: Verify S3 access
- Password Management: Change admin password
- About: Version and system information
## Logging

Mackuper provides comprehensive logging for debugging and audit purposes:

- Log Location: `./data/logs/mackuper.log`
- Rotation: 10MB per file, 10 backup files
- Format: `[timestamp] LEVEL [module.function:line] message`
- Levels: DEBUG (development), INFO (production)
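These rotation and format settings match what Python's standard `RotatingFileHandler` provides; an equivalent configuration looks roughly like this (a sketch, not necessarily Mackuper's exact code):

```python
# Logging setup matching the documented rotation (10 MB, 10 backups) and format.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "./data/logs/mackuper.log",  # assumes the directory already exists
    maxBytes=10 * 1024 * 1024,   # rotate at 10 MB
    backupCount=10,              # keep 10 rotated files
)
handler.setFormatter(logging.Formatter(
    "[%(asctime)s] %(levelname)s [%(module)s.%(funcName)s:%(lineno)d] %(message)s"
))

logger = logging.getLogger("mackuper")
logger.setLevel(logging.INFO)  # DEBUG in development
logger.addHandler(handler)
```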
View logs in real-time:

```bash
# Docker
docker-compose logs -f mackuper

# Manual installation
tail -f ./data/logs/mackuper.log
```

Log content includes:
- Backup execution steps
- S3 upload/download operations
- SSH connections
- Errors and exceptions with stack traces
- User actions (login, job changes, etc.)
"Server returned an invalid response"
- Check server logs:
docker-compose logs mackuperortail -f data/logs/mackuper.log - Verify AWS credentials are correct
- Ensure S3 bucket exists and is accessible
"Access denied to bucket"
- Verify IAM permissions include
s3:PutObject,s3:ListBucket,s3:DeleteObject - Check bucket policy doesn't block access
- Verify region matches bucket location
"Bucket does not exist"
- Verify bucket name is spelled correctly (no typos)
- Ensure bucket is in the selected region
- Check bucket wasn't deleted
**Backup fails with SSH connection error**

- Verify SSH credentials are correct
- Test SSH connection manually: `ssh user@hostname`
- Check firewall allows SSH (port 22 or custom)
- Verify remote path exists and is readable

**Backup fails during compression**

- Check available disk space in `/data/temp`
- Verify source files aren't locked/in use
- Check file permissions

**Upload to S3 fails**

- Verify internet connectivity
- Check AWS credentials haven't expired
- Ensure S3 bucket still exists
- Verify IAM permissions
**Job doesn't run on schedule**

- Check scheduler status in Dashboard (should show "Running")
- Verify cron expression is valid (use https://crontab.guru)
- Check container hasn't been stopped
- Review logs for scheduler errors

**"Database is locked"**

- Ensure only one Mackuper instance is running
- Check no other process is accessing `mackuper.db`
- Restart the container: `docker-compose restart`

**Data disappeared after restart**

- Verify the data volume is properly mounted
- Check the `docker-compose.yml` volume configuration
- Ensure the `./data` directory has correct permissions
**Slow backup uploads**

- Large files use multipart upload (automatic for files > 100MB)
- Check network bandwidth to S3
- Consider a different S3 region closer to your server

**High memory usage**

- Compression is memory-intensive for large files
- Consider using less compression (e.g., ZIP instead of XZ)
- Or use "none" compression for pre-compressed data
## Security

1. **Change Default Secret Key**
   - Set a strong, random `SECRET_KEY` in `.env`
   - Generate: `python -c "import secrets; print(secrets.token_hex(32))"`

2. **Use Strong Passwords**
   - Minimum 8 characters
   - Mix of uppercase, lowercase, numbers

3. **Restrict Network Access**
   - Use a firewall to limit access to port 5000
   - Consider running behind a reverse proxy with HTTPS

4. **Regular Backups**
   - Back up the `./data/mackuper.db` database file
   - Store a backup of AWS credentials separately

5. **Monitor Logs**
   - Review logs regularly for unusual activity
   - Check for failed login attempts
### Encryption

- AWS Credentials: Encrypted with Fernet (AES-128) before storage
- SSH Passwords: Encrypted with Fernet before storage
- Master Key: Derived from user password using PBKDF2-HMAC-SHA256 (480,000 iterations)
- Passwords: Hashed with PBKDF2-SHA256 before storage
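The scheme above can be illustrated with the `cryptography` package: derive a Fernet key from the password with PBKDF2-HMAC-SHA256 at 480,000 iterations, then encrypt the secret. This is illustrative only; Mackuper's actual salt handling and storage details may differ:

```python
# Derive a Fernet key from a password and encrypt a secret.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

password = b"correct horse battery staple"  # the user's password
salt = os.urandom(16)                       # would be stored alongside the ciphertext

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
key = base64.urlsafe_b64encode(kdf.derive(password))

token = Fernet(key).encrypt(b"example-aws-secret-key")
print(Fernet(key).decrypt(token))  # b'example-aws-secret-key'
```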
## Docker Deployment

```bash
# Build custom image
docker build -f docker/Dockerfile -t mackuper:latest .

# Run with docker-compose
docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f mackuper

# Stop
docker-compose down

# Update to new version
git pull
docker-compose down
docker-compose up -d --build
```

Backup your data:
```bash
# Backup database and files
tar -czf mackuper-backup-$(date +%Y%m%d).tar.gz ./data
```

Restore from backup:
```bash
# Stop container
docker-compose down

# Restore data
tar -xzf mackuper-backup-YYYYMMDD.tar.gz

# Start container
docker-compose up -d
```

To change the host port, edit `docker-compose.yml`:
```yaml
ports:
  - "8080:5000"  # Change 8080 to your preferred port
```

Then restart:

```bash
docker-compose down
docker-compose up -d
```

## Development

### Running Tests

```bash
# Install test dependencies
pip install pytest pytest-cov

# Run tests
pytest tests/

# With coverage
pytest --cov=app tests/
```

### Project Structure

```
mackuper/
├── app/                 # Application package
│   ├── __init__.py      # Flask app factory
│   ├── models.py        # Database models
│   ├── auth.py          # Authentication
│   ├── scheduler.py     # Job scheduler
│   ├── backup/          # Backup system
│   ├── routes/          # API endpoints
│   ├── utils/           # Utilities
│   └── static/          # Frontend assets
├── templates/           # HTML templates
├── docker/              # Docker files
├── data/                # Persistent data (gitignored)
├── tests/               # Test suite
└── requirements.txt     # Python dependencies
```
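`app/__init__.py` is described as a Flask app factory; in outline, such a factory might look like the sketch below (blueprint names and config values are illustrative, not Mackuper's actual ones):

```python
# Sketch of a Flask application factory.
from flask import Blueprint, Flask

bp = Blueprint("api", __name__, url_prefix="/api")  # hypothetical blueprint

@bp.get("/health")
def health():
    return {"status": "ok"}

def create_app():
    app = Flask(__name__)
    app.config["SECRET_KEY"] = "change-me"  # loaded from .env in a real setup
    # Extensions (SQLAlchemy, Flask-Login, APScheduler) would be initialized here.
    app.register_blueprint(bp)
    return app

if __name__ == "__main__":
    create_app().run(port=5000)
```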
### Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request
## FAQ

**Q: Can I back up to multiple S3 buckets?**
A: Currently one S3 bucket per installation. Run multiple Mackuper instances for multiple buckets.

**Q: Does it support backup encryption?**
A: S3 server-side encryption (SSE) is supported if enabled on your bucket. Client-side encryption is not currently implemented.

**Q: Can I restore backups from the UI?**
A: Not currently. Download from S3 and extract manually. A restore feature may be added in future versions.

**Q: What happens if a backup fails?**
A: Failed backups are logged with detailed error messages in the History tab and log files. You'll see the failure status in the dashboard.

**Q: Can I run backups manually?**
A: Yes! Click "Run Now" on any job in the Jobs tab to execute immediately.

**Q: How do I migrate to a new server?**
A: Copy the `./data` directory to the new server and start Mackuper. All settings, jobs, and history will be preserved.
## License

Apache License 2.0 - see LICENSE file for details.
## Support

- Issues: https://github.com/lirem/mackuper/issues
- Documentation: https://github.com/lirem/mackuper/wiki
## Credits

Built with Flask, SQLAlchemy, APScheduler, Alpine.js, Tailwind CSS, and Docker.