MoRE: Mixture of Recursions + Experts 🚀

A fast, copy‑paste‑friendly starter for exploring MoRE ideas: route tokens to experts and apply adaptive recursion based on importance. A clean CLI, a YAML config, and runnable examples let you prototype quickly.

Why you should care

  • Research-ready: minimal but opinionated scaffolding for MoRE experiments
  • Runs in 30 seconds: config + CLI + example, no extra glue code
  • Copy‑paste first: real commands and snippets below

Install (uv — recommended)

  • Install uv:
curl -LsSf https://astral.sh/uv/install.sh | sh
  • One-time project setup (creates .venv and installs deps):
uv venv && . .venv/bin/activate && uv pip install -r requirements.txt
  • Fallback (pip):
pip install -r requirements.txt

Requires Python 3.9+.

30‑second quickstart

  • CLI (with uv):
uv run -m more demo --name "Ada"
  • Routing demo (with uv):
uv run -m more route --scores 0.2 0.5 0.8 0.95 --threshold 0.5 --max-depth 4
  • Zero‑setup one‑liner (uv will provision deps on the fly):
uv run --with pyyaml -m more demo --name "Ada"
  • Programmatic (inside repo):
from more.core import load_config, intro_message, assign_experts_and_recursions

cfg = load_config("config.yaml")
print(intro_message(cfg, name="Ada"))
print(assign_experts_and_recursions([0.2, 0.5, 0.8, 0.95], cfg.routing_threshold, cfg.max_recursion_depth))

Real examples you can copy‑paste

  • Config knobs:
# config.yaml
project:
  name: "MoRE"
  default_name: "Researcher"
messages:
  greeting: "Hello"
routing:
  threshold: 0.5
  max_depth: 4
  • Route with the defaults:
uv run -m more route --scores 0.1 0.3 0.7 0.9
# → score=0.10 -> expert=0 depth=1
#   score=0.30 -> expert=1 depth=2
#   score=0.70 -> expert=2 depth=3
#   score=0.90 -> expert=3 depth=4
  • Run the example script:
uv run examples/quickstart.py
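
To make the routing idea concrete, here is a minimal, hypothetical sketch of the kind of score-to-(expert, depth) mapping the route command illustrates. It is not the code in more/core.py: it simply buckets an importance score in [0, 1] into max_depth bins (which happens to reproduce the demo output above) and treats the threshold as an "important token" flag, which is an assumption.

# toy_routing.py: illustrative sketch only; the real logic lives in more/core.py
from typing import List, Tuple

def route(scores: List[float], threshold: float = 0.5, max_depth: int = 4) -> List[Tuple[int, int, bool]]:
    """Map each importance score in [0, 1] to (expert index, recursion depth, important?)."""
    results = []
    for s in scores:
        expert = min(int(s * max_depth), max_depth - 1)  # bucket the score into max_depth bins
        depth = expert + 1                               # more important tokens recurse deeper
        results.append((expert, depth, s >= threshold))  # threshold use here is a guess
    return results

scores = [0.1, 0.3, 0.7, 0.9]
for s, (e, d, _) in zip(scores, route(scores)):
    print(f"score={s:.2f} -> expert={e} depth={d}")

Its output matches the route demo above, but treat it as a mental model rather than the project's implementation.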

Project layout

.
├── more/
│   ├── __init__.py
│   ├── __main__.py        # enables `python -m more`
│   ├── cli.py             # argparse CLI with demo + route
│   └── core.py            # config, intro_message, toy expert/recursion routing
├── examples/
│   └── quickstart.py      # MoRE demo example
├── config.yaml            # routing/defaults
├── requirements.txt       # minimal runtime deps (PyYAML)
├── CONTRIBUTING.md        # short contributor guide
└── README.md
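
For orientation, load_config from the quickstart could be as small as the sketch below. This is an illustrative guess that assumes PyYAML (the one dependency in requirements.txt) and assumes the config.yaml keys shown earlier map onto the routing_threshold and max_recursion_depth attributes used in the programmatic example; the actual more/core.py may be structured differently.

# Hypothetical config loader; see more/core.py for the real one.
from dataclasses import dataclass

import yaml  # PyYAML, listed in requirements.txt

@dataclass
class Config:
    name: str
    default_name: str
    greeting: str
    routing_threshold: float
    max_recursion_depth: int

def load_config(path: str = "config.yaml") -> Config:
    with open(path) as f:
        raw = yaml.safe_load(f)
    return Config(
        name=raw["project"]["name"],
        default_name=raw["project"]["default_name"],
        greeting=raw["messages"]["greeting"],
        routing_threshold=raw["routing"]["threshold"],
        max_recursion_depth=raw["routing"]["max_depth"],
    )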

What this gives you

  • MoRE‑themed CLI to kickstart routing/recursion experiments
  • YAML config for thresholds and depth
  • Importable API for notebooks and benchmarking

If this saves you time, ⭐️ the repo and send a PR with your improvements!

About

An attempt to tackle bottlenecks in video generation AI by combining Mixture-of-Experts (MoE) and Mixture-of-Recursions (MoR) transformer architectures, specifically targeting Diffusion Transformer (DiT) video models.
