Storytuning is a platform that lets creators own and monetize AI models through content-based fine-tuning. It addresses the critical issue of intellectual property rights in the age of generative AI by allowing creators to keep control over their style, their content, and the AI models trained on that content.
- Frontend: Next.js, React, Material-UI, wagmi
- Backend: Node.js, Express, Firebase
- AI: PyTorch, diffusers, transformers
- Storage: Pinata IPFS
- Blockchain: Story Protocol
- Content Ownership: Creators fully own their fine-tuned models
- IP Registration: Models can be registered as IP on-chain using Story Protocol
- Licensing System: Users must purchase license tokens to use models
- Royalty Distribution: Automatic revenue split (60% creator, 30% platform, 10% AI infrastructure)
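The 60/30/10 split above can be sketched as follows. This is an illustration only, not the platform's actual payout code; the function and share names are hypothetical, and shares are held in basis points so integer math stays exact:

```typescript
// Illustrative revenue split matching the 60/30/10 policy above.
// Shares are in basis points (1/100 of a percent) to avoid float drift.
const SHARES_BPS: Record<string, number> = {
  creator: 6000,        // 60%
  platform: 3000,       // 30%
  infrastructure: 1000, // 10%
};

function splitRevenue(totalWei: bigint): Record<string, bigint> {
  const out: Record<string, bigint> = {};
  let allocated = 0n;
  for (const [party, bps] of Object.entries(SHARES_BPS)) {
    out[party] = (totalWei * BigInt(bps)) / 10000n;
    allocated += out[party];
  }
  // Any integer-division remainder goes to the creator.
  out.creator += totalWei - allocated;
  return out;
}

// Example: splitting 1 ETH (10^18 wei) among the three parties.
const split = splitRevenue(10n ** 18n);
// split.creator is 60% of the total, split.platform 30%, split.infrastructure 10%.
```

Working in wei-sized `bigint` values mirrors how on-chain royalty modules account for funds; percentages as basis points is the common convention there.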
- Image Upload and IPFS Storage
- Real-time Fine-tuning Processing via Google Colab
- Image Generation using Fine-tuned Models
Create a `.env` file in the project root with the following content:

```env
# Firebase Configuration
FIREBASE_DATABASE_URL=

# Pinata Configuration
PINATA_API_KEY=
PINATA_API_SECRET=
PINATA_JWT=

# API Configuration
NEXT_PUBLIC_API_URL=http://localhost:3001
```

Install dependencies for each package:

```bash
# Install Backend Packages
cd backend
npm install

# Install Frontend Packages
cd ../frontend
npm install
```

Start both development servers:

```bash
# Start Backend Server
cd backend
npm run dev

# Start Frontend Server (in a new terminal)
cd frontend
npm run dev
```

- Creator uploads images to the platform
- Images are registered as IP using Story Protocol
- Creator selects images and requests fine-tuning
- Model is trained via Colab-based pipeline
- Fine-tuned model is published to the platform
- Users can browse and purchase license tokens
- Revenue is automatically distributed to all parties
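The steps above can be sketched as a pipeline. Every function name here is a hypothetical placeholder for the corresponding platform step (IPFS pinning, Story Protocol registration, the Colab fine-tuning job, licensing, payout), not an actual Storytuning or Story Protocol API:

```typescript
// Hypothetical end-to-end creator workflow; each step stands in for a
// real service call and just threads its result through a shared context.
type Ctx = Record<string, string>;
interface Step { name: string; run: (ctx: Ctx) => Ctx; }

const pipeline: Step[] = [
  { name: "upload images",        run: (c) => ({ ...c, cid: "ipfs://<cid>" }) },
  { name: "register IP",          run: (c) => ({ ...c, ipId: "<story-ip-id>" }) },
  { name: "fine-tune model",      run: (c) => ({ ...c, modelId: "<model-id>" }) },
  { name: "publish model",        run: (c) => ({ ...c, listed: "true" }) },
  { name: "sell license tokens",  run: (c) => ({ ...c, licenses: "sold" }) },
  { name: "distribute royalties", run: (c) => ({ ...c, payout: "60/30/10" }) },
];

function runWorkflow(): Ctx {
  return pipeline.reduce((ctx, step) => step.run(ctx), {} as Ctx);
}
```

The ordering matters: IP registration must precede fine-tuning so the trained model can be linked back to the registered source images.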
- Never commit the `.env` file to GitHub
- Keep API keys secure
- Configure environment variables appropriately for production deployment
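To keep the `.env` file out of version control, it can be listed in `.gitignore`. The entries below are the usual Next.js/Node defaults, included here as an example:

```gitignore
# local secrets
.env
.env.local

# dependencies and build output
node_modules/
.next/
```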
- Phase 1: Launch MVP and test revenue logic with real users
- Phase 2: Automate fine-tuning backend and scale infrastructure
- Phase 3: Improve model quality and study base model training data
- Phase 4: Collaborate with AI infrastructure providers for high-performance models