Layer8 is a comprehensive privacy-first platform designed to protect sensitive data when interacting with AI systems. By providing advanced anonymization, encryption, and secure processing capabilities, Layer8 ensures that your data remains private while still leveraging the full power of AI.
- Data Anonymization & De-anonymization: Automatically identify and mask sensitive information before sending to AI models
- Chrome Extension: Locally encrypt data before sending to AI platforms and decrypt responses
- Local LLM Deployment: Self-hosted AI models via Docker for completely private processing
- Secure Cloud Processing: SGX enclave protection for sensitive data when using cloud services
- Semantic Analysis: Advanced NLP to detect and protect sensitive entities and patterns
- RAG System: Private retrieval-augmented generation for data analysis without exposing information
- User-Friendly Interface: Clean, modern UI for easy interaction with secure AI tools
Layer8 consists of four main components:
Backend services for data anonymization, LLM integration, and processing:
```
api/
├── main.py                 # Main API controller
├── llm_integration.py      # LLM provider integration
├── prompt_generator.py     # System/user prompt generation
├── config.json             # Configuration file
├── nlp_data_anonymizer/    # Core anonymization package
└── api_service/            # API server components
```
User interface built with Svelte:
```
frontend/
├── src/              # Source files
│   ├── routes/       # Application routes
│   ├── lib/          # Shared components and utilities
│   └── assets/       # Static assets
├── static/           # Public static files
└── package.json      # Dependencies and scripts
```
Browser extension for encrypting data sent to AI platforms:
```
extension/
├── manifest.json     # Extension configuration
├── popup.html        # Extension popup interface
├── popup.js          # Popup functionality
├── content.js        # Content script for page integration
├── background.js     # Background service worker
└── images/           # Extension icons and images
```
Retrieval-augmented generation for private data analysis:
```
RAG/
├── app.py            # RAG server application
├── test_api.py       # API testing utilities
└── requirements.txt  # Python dependencies
```
The fastest way to get started with Layer8 is using our Docker image:
```bash
# Pull the image
docker pull vickydev810/layer8:latest

# Create required volumes for data persistence
docker volume create ollama
docker volume create open-webui

# Run the container
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  vickydev810/layer8:latest
```

Once the container is running, access the Web UI at http://localhost:3000.
1. Navigate to the API directory:

   ```bash
   cd api
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Install the required NLP model:

   ```bash
   python -m spacy download en_core_web_lg
   ```

5. Copy the example environment file and configure your API keys:

   ```bash
   cp .env.example .env
   # Edit .env with your API keys
   ```

6. Run the API server:

   ```bash
   python main.py
   ```
1. Navigate to the frontend directory:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Start the development server:

   ```bash
   npm run dev
   ```

4. Build for production:

   ```bash
   npm run build
   ```
1. Open Chrome and navigate to `chrome://extensions/`
2. Enable "Developer mode"
3. Click "Load unpacked" and select the `extension` directory
4. The extension icon should appear in your browser toolbar
1. Navigate to the RAG directory:

   ```bash
   cd RAG
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Configure environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your API keys
   ```

5. Run the RAG server:

   ```bash
   python app.py
   ```
```bash
cd api
python main.py --interactive
```

When prompted, enter your query containing sensitive information. The system will:
- Detect and mask sensitive entities
- Send the anonymized query to the LLM
- De-anonymize the response
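The round trip above can be sketched in a few lines of Python. This is an illustrative toy, not the actual `nlp_data_anonymizer` implementation (which uses spaCy NER); the placeholder format and regex pattern here are assumptions:

```python
import re

def anonymize(text, pattern):
    """Replace each sensitive match with a placeholder token and record the mapping."""
    mapping = {}

    def repl(match):
        token = f"<ENTITY_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return re.sub(pattern, repl, text), mapping

def deanonymize(text, mapping):
    """Restore the original values in the LLM's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

# Toy pattern: email addresses plus one domain-specific term.
pattern = r"[\w.]+@[\w.]+|Project Alpha"
masked, mapping = anonymize("Email alice@example.com about Project Alpha", pattern)
# `masked` is what reaches the LLM; the mapping never leaves your machine.
restored = deanonymize(masked, mapping)
```

The key property is that the token-to-value mapping stays local, so the LLM only ever sees placeholders.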
- Navigate to any AI service (ChatGPT, Gemini, Grok)
- Type your message in the input field
- Click the "Encrypt" button before sending
- Your message will be encrypted before transmission
- Responses will be automatically decrypted
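Conceptually, the extension wraps a symmetric encrypt/decrypt round trip around the AI service. The sketch below illustrates that idea with a toy one-time-pad XOR in Python; the real extension is JavaScript and would use a proper authenticated cipher (for example AES-GCM via the Web Crypto API), so treat every detail here as an assumption, not the extension's actual scheme:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR: applying the same key twice restores the input."""
    return bytes(b ^ k for b, k in zip(data, key))

message = "My SSN is 123-45-6789".encode()
key = secrets.token_bytes(len(message))   # stays on your machine, never transmitted
ciphertext = xor_bytes(message, key)      # what the AI platform would actually see
plaintext = xor_bytes(ciphertext, key)    # decrypting the response works the same way
```

The point is the trust boundary: the key material never leaves the browser, so the remote service only handles ciphertext.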
- Start the RAG server
- Upload your CSV or PDF file
- Ask natural language questions about your data
- Receive analyzed results without exposing sensitive information
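At its core, the retrieval step selects only the document chunks most relevant to the question before anything is sent to a model. Here is a minimal keyword-overlap sketch of that step; the actual RAG server presumably uses embeddings and a vector store, so the scoring and function names below are assumptions for illustration:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set for crude overlap scoring."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query and keep the top k."""
    q = tokens(query)
    return sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)[:k]

chunks = [
    "alice bought 3 widgets in March",
    "bob returned 1 gadget in April",
    "carol bought 2 widgets in May",
]
top = retrieve(chunks, "Who bought widgets?")
# Only `top` is passed to the LLM as context, not the whole file.
```

Because only the top-k chunks become LLM context, the bulk of the uploaded file is never exposed to the model.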
Layer8 implements several layers of protection:
- Local Processing: Primary data anonymization happens on your local device
- Edge Computing: Process sensitive data at the edge using local LLMs
- Secure Enclaves: When using cloud services, data is protected in SGX enclaves
- Homomorphic Encryption: (Future) Perform computations on encrypted data
- Backend: Python, Flask, spaCy, PyTorch
- Frontend: Svelte, TypeScript, TailwindCSS
- Extension: JavaScript, Chrome Extensions API
- Security: Intel SGX, Secure Enclaves, Encryption Libraries
- AI Models: Support for OpenAI, Anthropic, Google Gemini, and local models
Edit `api/config.json` to add domain-specific sensitive terms:

```json
"domain_specific_terms": {
  "project": ["Project Alpha", "Operation Phoenix"],
  "product": ["SecretProduct X9"],
  "internal_code": ["XZ-1234", "ACME-7890"]
}
```

Configure which LLM provider to use in `api/config.json`:
```json
"llm": {
  "provider": "openai",
  "model": "gpt-4o",
  "temperature": 0.7,
  "max_tokens": 1000
}
```

Supported values for `provider` are `"openai"`, `"anthropic"`, `"gemini"`, and `"local"`. (Note that JSON does not support comments, so keep annotations like this out of the config file itself.)

- Homomorphic Encryption: Perform AI operations on encrypted data
- Multi-user Support: Team-based access controls and sharing
- Custom Fine-tuning: Create private models tuned to your specific needs
- Federated Learning: Train models across distributed datasets without sharing data
- Audit Trails: Comprehensive logging of all AI interactions for compliance
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
For questions, issues or feature requests, please open an issue on GitHub or contact the development team.
Secure your data. Empower your AI.
Layer8 - Where privacy meets intelligence
