Layer8 - Secure AI Interaction Platform

Overview

Layer8 is a comprehensive privacy-first platform designed to protect sensitive data when interacting with AI systems. By providing advanced anonymization, encryption, and secure processing capabilities, Layer8 ensures that your data remains private while still leveraging the full power of AI.

🛡️ Core Features

  • Data Anonymization & De-anonymization: Automatically identify and mask sensitive information before it is sent to AI models (see the sketch after this list)
  • Chrome Extension: Locally encrypt data before sending to AI platforms and decrypt responses
  • Local LLM Deployment: Self-hosted AI models via Docker for completely private processing
  • Secure Cloud Processing: SGX enclave protection for sensitive data when using cloud services
  • Semantic Analysis: Advanced NLP to detect and protect sensitive entities and patterns
  • RAG System: Private retrieval-augmented generation for data analysis without exposing information
  • User-Friendly Interface: Clean, modern UI for easy interaction with secure AI tools
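
A minimal sketch of the masking step, assuming spaCy's en_core_web_lg model (installed during setup below). The actual pipeline lives in api/nlp_data_anonymizer/ and may handle more than plain NER (e.g., pattern-based entities); this only illustrates the placeholder-and-mapping idea:

import spacy

nlp = spacy.load("en_core_web_lg")

def anonymize(text):
    """Replace named entities with placeholders and keep a mapping for later restore."""
    doc = nlp(text)
    mapping = {}
    for i, ent in enumerate(doc.ents):
        placeholder = f"<{ent.label_}_{i}>"
        mapping[placeholder] = ent.text
        text = text.replace(ent.text, placeholder)
    return text, mapping

masked, mapping = anonymize("Send the contract to Alice Johnson at Acme Corp by next Friday.")
print(masked)  # typically yields placeholders for the PERSON, ORG, and DATE spans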

🧩 Project Architecture Diagram

[Architecture diagram image]

🏗️ Project Structure

Layer8 consists of four main components:

API Service

Backend services for data anonymization, LLM integration, and processing:

api/
├── main.py                  # Main API controller
├── llm_integration.py       # LLM provider integration
├── prompt_generator.py      # System/user prompt generation
├── config.json              # Configuration file
├── nlp_data_anonymizer/     # Core anonymization package
└── api_service/             # API server components

Frontend

User interface built with Svelte:

frontend/
├── src/                     # Source files
│   ├── routes/              # Application routes
│   ├── lib/                 # Shared components and utilities
│   └── assets/              # Static assets
├── static/                  # Public static files
└── package.json             # Dependencies and scripts

Chrome Extension

Browser extension for encrypting data sent to AI platforms:

extension/
├── manifest.json            # Extension configuration
├── popup.html               # Extension popup interface
├── popup.js                 # Popup functionality
├── content.js               # Content script for page integration
├── background.js            # Background service worker
└── images/                  # Extension icons and images

RAG System

Retrieval-augmented generation for private data analysis:

RAG/
├── app.py                   # RAG server application
├── test_api.py              # API testing utilities
└── requirements.txt         # Python dependencies

🚀 Getting Started

Quick Start with Docker

The fastest way to get started with Layer8 is to use our Docker image:

# Pull the image
docker pull vickydev810/layer8:latest

# Create required volumes for data persistence
docker volume create ollama
docker volume create open-webui

# Run the container
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  vickydev810/layer8:latest

Once running, access the Web UI at http://localhost:3000
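
To confirm the UI is actually serving, a quick check with the Python requests library (assumes the -p 3000:8080 mapping from the run command above):

import time
import requests

# Poll the mapped port until the web UI responds.
for _ in range(30):
    try:
        if requests.get("http://localhost:3000", timeout=2).ok:
            print("Web UI is up at http://localhost:3000")
            break
    except requests.ConnectionError:
        time.sleep(2)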

Manual Installation

API Service Setup

  1. Navigate to the API directory:

    cd api
  2. Create a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Install required NLP model:

    python -m spacy download en_core_web_lg
  5. Copy the example environment file and configure your API keys (a quick key check follows these steps):

    cp .env.example .env
    # Edit .env with your API keys
  6. Run the API server:

    python main.py
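
The exact variable names depend on what .env.example defines; as a quick sanity check after step 5, here is a sketch using python-dotenv with hypothetical key names (adjust them to match your file):

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory (run inside api/)

# Hypothetical key names -- replace with whatever your .env.example lists.
for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'missing'}")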

Frontend Setup

  1. Navigate to the frontend directory:

    cd frontend
  2. Install dependencies:

    npm install
  3. Start the development server:

    npm run dev
  4. Build for production:

    npm run build

Chrome Extension Setup

  1. Open Chrome and navigate to chrome://extensions/
  2. Enable "Developer mode"
  3. Click "Load unpacked" and select the extension directory
  4. The extension icon should appear in your browser toolbar

RAG System Setup

  1. Navigate to the RAG directory:

    cd RAG
  2. Create a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Configure environment variables:

    cp .env.example .env
    # Edit .env with your API keys
  5. Run the RAG server:

    python app.py

💻 Usage Examples

CLI Data Anonymization

cd api
python main.py --interactive

When prompted, enter your query containing sensitive information. The system will:

  1. Detect and mask sensitive entities
  2. Send the anonymized query to the LLM
  3. De-anonymize the response (see the sketch below)
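
Step 3 is the inverse of the masking shown under Core Features: placeholders in the model's reply are swapped back for the original values using the saved mapping. A minimal sketch (the real logic lives in api/nlp_data_anonymizer/):

def deanonymize(response, mapping):
    """Restore the original value for every placeholder the LLM echoed back."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response

# mapping produced during anonymization, e.g. {"<PERSON_0>": "Alice Johnson"}
print(deanonymize("Draft a reply to <PERSON_0>.", {"<PERSON_0>": "Alice Johnson"}))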

Browser Extension

  1. Navigate to a supported AI service (ChatGPT, Gemini, or Grok)
  2. Type your message in the input field
  3. Click the "Encrypt" button before sending
  4. Your message will be encrypted before transmission
  5. Responses will be automatically decrypted

Data Analysis with RAG

  1. Start the RAG server
  2. Upload your CSV or PDF file
  3. Ask natural language questions about your data
  4. Receive analyzed results without exposing sensitive information (see the client sketch below)
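
A client-side sketch of that flow using the requests library. The host, port, and route names (/upload, /query) are assumptions made for illustration; check RAG/app.py and RAG/test_api.py for the real endpoints:

import requests

BASE = "http://localhost:5000"  # assumed; see RAG/app.py for the actual host and port

# Step 2: upload a CSV or PDF (route name is hypothetical)
with open("sales.csv", "rb") as f:
    requests.post(f"{BASE}/upload", files={"file": f})

# Steps 3-4: ask a natural-language question about the uploaded data
answer = requests.post(f"{BASE}/query", json={"question": "What was the total revenue last quarter?"})
print(answer.json())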

🔒 Privacy Architecture

Layer8 implements several layers of protection:

  1. Local Processing: Primary data anonymization happens on your local device
  2. Edge Computing: Process sensitive data at the edge using local LLMs
  3. Secure Enclaves: When using cloud services, data is protected in SGX enclaves
  4. Homomorphic Encryption: (Future) Perform computations on encrypted data

🛠️ Technologies Used

  • Backend: Python, Flask, spaCy, PyTorch
  • Frontend: Svelte, TypeScript, TailwindCSS
  • Extension: JavaScript, Chrome Extensions API
  • Security: Intel SGX, Secure Enclaves, Encryption Libraries
  • AI Models: Support for OpenAI, Anthropic, Google Gemini, and local models

🔍 Advanced Configuration

Custom Entity Detection

Edit api/config.json to add domain-specific sensitive terms:

"domain_specific_terms": {
  "project": ["Project Alpha", "Operation Phoenix"],
  "product": ["SecretProduct X9"],
  "internal_code": ["XZ-1234", "ACME-7890"]
}
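
Conceptually, each configured term is handled like a detected entity: occurrences are replaced with a category placeholder before the text leaves your machine. A simplified illustration of that lookup (the real matching in nlp_data_anonymizer may be more sophisticated, e.g. case-insensitive):

import json

with open("api/config.json") as f:
    config = json.load(f)

def mask_domain_terms(text, config):
    """Replace every configured domain-specific term with a category placeholder."""
    for category, terms in config.get("domain_specific_terms", {}).items():
        for i, term in enumerate(terms):
            text = text.replace(term, f"<{category.upper()}_{i}>")
    return text

print(mask_domain_terms("Status update on Project Alpha and XZ-1234.", config))
# -> "Status update on <PROJECT_0> and <INTERNAL_CODE_0>."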

Choosing LLM Providers

Configure which LLM provider to use in api/config.json:

"llm": {
  "provider": "openai",  // Options: "openai", "anthropic", "gemini", "local"
  "model": "gpt-4o",
  "temperature": 0.7,
  "max_tokens": 1000
}
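
As an illustration of how that block can drive provider selection, here is a sketch with only the OpenAI branch filled in (using the official openai Python SDK); this is not Layer8's actual llm_integration.py, and the other providers are left as stubs:

import json
from openai import OpenAI  # pip install openai

def complete(prompt):
    llm = json.load(open("api/config.json"))["llm"]
    if llm["provider"] == "openai":
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model=llm["model"],
            messages=[{"role": "user", "content": prompt}],
            temperature=llm["temperature"],
            max_tokens=llm["max_tokens"],
        )
        return resp.choices[0].message.content
    raise NotImplementedError(f"provider {llm['provider']!r} is not wired up in this sketch")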

🔮 Future Roadmap

  • Homomorphic Encryption: Perform AI operations on encrypted data
  • Multi-user Support: Team-based access controls and sharing
  • Custom Fine-tuning: Create private models tuned to your specific needs
  • Federated Learning: Train models across distributed datasets without sharing data
  • Audit Trails: Comprehensive logging of all AI interactions for compliance

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📞 Support

For questions, issues, or feature requests, please open an issue on GitHub or contact the development team.


Secure your data. Empower your AI.
Layer8 - Where privacy meets intelligence
