The 3D Object Generation Blueprint is an end-to-end generative AI workflow that lets users prototype 3D scenes quickly by simply describing them. From a user's scene idea, the Blueprint generates object recommendations, associated prompts, and preview images using a Llama 3.1 8B LLM and NVIDIA SANA, then produces ready-to-use 3D objects with Microsoft TRELLIS.
This blueprint supports the following NVIDIA GPUs: RTX 5090, RTX 5080, RTX 4090, RTX 4080, RTX 6000 Ada. We're planning to add wider GPU support in the near future. We recommend at least 48 GB of system RAM.
- Chat interface for scene planning
- AI-assisted object and prompt generation
- Automatic 3D asset generation from text prompts
- Blender import functionality for generated assets
- GPU memory management - Intelligent model loading/unloading
- VRAM management with model termination
Before you begin, ensure you have:
- Windows 10/11
- NVIDIA GPU (RTX 4080 or higher recommended)
- CUDA Toolkit 12.8 - Install from NVIDIA CUDA 12.8 Downloads
- ~50GB disk space for AI models
- HuggingFace Account (free) - Required for downloading some models. Create an account at huggingface.co and generate an access token at huggingface.co/settings/tokens
Note: gsplat is installed from a prebuilt wheel with CUDA kernels precompiled for Ampere (sm_86), Ada (sm_89), and Blackwell (sm_120). No JIT compilation or build tools are required.
Step 1: Clone the repository with submodules:
```shell
git clone --recurse-submodules https://github.com/NVIDIA-AI-Blueprints/3d-object-generation.git
```

Step 2: Set the required environment variables:
```powershell
# PowerShell
$env:CUDA_HOME = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8"
$env:HF_TOKEN = "your_huggingface_token_here"
```

```bat
# Command Prompt
set CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8
set HF_TOKEN=your_huggingface_token_here
```

Note: Adjust the `CUDA_HOME` path if your CUDA 12.8 is installed in a different location.
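As a quick sanity check before running the setup script, the two variables above can be validated with a few lines of Python. `check_setup_env` is a hypothetical helper for illustration, not part of the repository:

```python
import os

def check_setup_env(env=None):
    """Return a list of problems with the setup variables described above.

    Only checks that CUDA_HOME points at an existing directory and that
    HF_TOKEN is non-empty; it does not probe the CUDA installation itself.
    """
    env = os.environ if env is None else env
    problems = []
    cuda_home = env.get("CUDA_HOME")
    if not cuda_home:
        problems.append("CUDA_HOME is not set")
    elif not os.path.isdir(cuda_home):
        problems.append(f"CUDA_HOME does not exist: {cuda_home}")
    if not env.get("HF_TOKEN"):
        problems.append("HF_TOKEN is not set (needed to download gated models)")
    return problems
```

Running it with an empty environment reports both variables as missing; with valid values it returns an empty list.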
Step 3: Open PowerShell as Administrator and navigate to this repository. Run the automated installation script. Select (n) if prompted to re-clone the repo:
```powershell
.\setup_environment.ps1
```

The setup script will automatically:
- Install Git and Git LFS (if not present)
- Install Miniconda (if not present)
- Create and configure the Conda environment
- Install all dependencies and CUDA extensions
- Download required AI models (~50GB)
Optional Parameters:
```powershell
.\setup_environment.ps1 -InstallPath "D:\my-custom-path" -CondaEnvName "myenv"
```

| Parameter | Default | Description |
|---|---|---|
| `-InstallPath` | `C:\3d-object-generation` | Where to install the project |
| `-CondaEnvName` | `3dwithtrellis` | Name of the Conda environment |
Note: The installation process may take 30-60 minutes depending on your internet connection and hardware.
This blueprint requires Blender 4.2+ for the add-on integration. You can download and install it manually from the Blender website, or via winget:
```shell
winget install --id 9NW1B444LDLW
```
Note: Blender can be installed before or after the main setup. The installation script will automatically copy the add-ons to your Blender installation.
You can customize the following settings in config.py:
```python
# Model name from HuggingFace
# Qwen3-4B:     "Qwen/Qwen3-4B" (4B params)
# Llama-3.1-8B: "meta-llama/Llama-3.1-8B-Instruct" (8B params)
NATIVE_LLM_MODEL = "Qwen/Qwen3-4B"

# Precision options: "float16", "bfloat16", "float32", "int4" (for GPTQ)
NATIVE_LLM_PRECISION = "bfloat16"
```

The application automatically manages GPU memory across three models:
- LLM (Qwen3-4B or Llama 3.1 8B)
- SANA (Image generation)
- TRELLIS (3D generation)
Memory Management Strategy:
- All models are pre-loaded at startup
- Models are moved to CPU when not actively in use
- Only one model runs on GPU at a time
- GPU cache is cleared between model switches
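A minimal sketch of that swap strategy, using stand-in objects rather than real models. A real implementation would call `.to("cuda")`/`.to("cpu")` on torch modules and `torch.cuda.empty_cache()` where `clear_cache` is invoked; none of the names below come from the blueprint's code:

```python
class FakeModel:
    """Stand-in exposing the .to(device) interface the manager relies on."""
    def __init__(self):
        self.device = "cpu"
    def to(self, device):
        self.device = device
        return self

class SwapManager:
    """Keep at most one model on the GPU; park the others on the CPU."""
    def __init__(self, models):
        self.models = models              # name -> model, all pre-loaded
        self.active = None
        for m in self.models.values():
            m.to("cpu")                   # start everything off-GPU

    def activate(self, name):
        if self.active == name:
            return self.models[name]
        if self.active is not None:
            self.models[self.active].to("cpu")  # evict the previous model
            self.clear_cache()                  # would be torch.cuda.empty_cache()
        self.active = name
        return self.models[name].to("cuda")

    def clear_cache(self):
        pass  # placeholder for a real GPU cache flush
```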
There are two ways to run the application:
| Method | Best For | Description |
|---|---|---|
| Blender Add-on (Recommended) | 3D artists using Blender | Start services from within Blender, with integrated asset import |
| Standalone | Testing or non-Blender workflows | Run python app.py manually from command line |
Both methods launch the same Gradio web interface. If you're working in Blender, use the add-on — there's no need to run python app.py separately.
The 3D Object Generation add-on launches and manages all services directly from Blender.
- Open Blender
- Go to Edit → Preferences → Add-ons
- Enable **3D Object Generation** and **Asset Importer** by checking the boxes
- Expand the **3D Object Generation** add-on and set **Blueprint Base Path** to your installation directory (e.g., `E:\3d-object-generation`)
- In the 3D Viewport, press N to open the sidebar
- Click the **3D Object Generation** tab
- (Optional) Open the system console for monitoring: Window → Toggle System Console
- Click **Start Services**: this launches the LLM, SANA, and TRELLIS services (may take up to 3 minutes)
- Once all services show READY, click **Open 3D Object Generation UI**

To stop services and free GPU memory, click **Services Started .. Click to Terminate**.
If you prefer to run the application outside of Blender:
- Open a new terminal and run:
```shell
conda activate 3dwithtrellis
cd C:\3d-object-generation
python app.py
```
- Open your browser to the URL shown in the terminal (typically http://127.0.0.1:7860/)
💡 Recommended: For the best experience, use the light theme by accessing the application at: http://127.0.0.1:7860/?__theme=light
To stop the application and free VRAM, simply press Ctrl+C in the terminal where the application is running.
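If you script against the standalone server, a small poll loop can wait for the UI to come up before sending requests. The probe is injected so the sketch stays self-contained; in practice it might be an HTTP check against http://127.0.0.1:7860/ (an assumption for illustration, not part of the blueprint):

```python
import time

def wait_until_ready(probe, timeout=180.0, interval=0.5, sleep=time.sleep):
    """Poll probe() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        sleep(interval)
    return False
```

The 180-second default mirrors the "up to 3 minutes" startup time mentioned for the services.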
Once the application is running, you can:

- **Scene Planning**
  - Describe your desired scene in natural language
- **Asset Generation**
  - The LLM automatically creates prompts for the suggested items, which are sent to the 2D image generator
  - Each image includes additional controls:
    - Refresh: generate a new image based on the existing prompt
    - Edit: edit the prompt and generate a new image
    - Delete: remove the image from the gallery display
    - Generate 3D: generate a 3D object from the image
  - The color of the Generate 3D button indicates the status of 3D generation for that object:
    - Object has not been queued for 3D generation
    - Object has been queued for 3D generation, but generation has not completed
    - A 3D model has been generated for this object
    - Object has been flagged by guardrails as potentially inappropriate; no 3D object will be generated
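The button states above amount to a small state machine. A sketch of the idea (the state and event names are mine, not identifiers from the codebase):

```python
from enum import Enum

class Gen3DStatus(Enum):
    NOT_QUEUED = "not queued for 3D generation"
    QUEUED = "queued, generation not completed"
    DONE = "3D model generated"
    BLOCKED = "flagged by guardrails; will not be generated"

# Allowed transitions: (current state, event) -> next state
_TRANSITIONS = {
    (Gen3DStatus.NOT_QUEUED, "queue"): Gen3DStatus.QUEUED,
    (Gen3DStatus.QUEUED, "finished"): Gen3DStatus.DONE,
    (Gen3DStatus.NOT_QUEUED, "guardrail_block"): Gen3DStatus.BLOCKED,
}

def next_status(current, event):
    """Return the next state; an invalid event leaves the state unchanged."""
    return _TRANSITIONS.get((current, event), current)
```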
  - Convert All converts every image to a 3D object (delete unwanted images before converting)
  - NOTE: Image-to-3D processing takes up to 45 seconds per object on an RTX 5090. With Convert All, total time scales with the number of objects and may be significant; the UI is not updated until all objects have been converted.
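Using the 45-seconds-per-object figure above, the worst-case Convert All wall time is easy to estimate (the helper is illustrative, not part of the app):

```python
def convert_all_estimate(num_objects, seconds_per_object=45):
    """Worst-case seconds for Convert All, per the RTX 5090 figure above."""
    return num_objects * seconds_per_object
```

For example, ten objects comes to roughly 450 seconds (7.5 minutes) with no UI updates in between.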
- **Save Objects**
  - Export Objects to File saves the generated objects to a folder
- **Blender Integration**
  - Import generated assets directly into Blender
  - Use the Asset Importer add-on, select the desired scene folder, and click Import assets
  - Assets are imported with the asset tag applied; saving the scene to the %userprofile%\Documents\Blender\assets folder adds the imported objects to the Blender asset browser
  - Continue working with the assets in your 3D workflow
  - Can be used with the 3D Guided Gen AI Blueprint
```python
# =============================================================================
# LLM Settings
# =============================================================================
NATIVE_LLM_MODEL = "Qwen/Qwen3-4B"       # HuggingFace model ID
NATIVE_LLM_PRECISION = "bfloat16"        # float16, bfloat16, or int4 (for GPTQ)

# =============================================================================
# Logging
# =============================================================================
VERBOSE = False  # Detailed timing/memory logs
```
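One way the `NATIVE_LLM_PRECISION` string might be resolved at load time. This is a sketch of the idea, not the blueprint's actual code; a real version would return torch dtypes rather than their names:

```python
_PRECISION_DTYPES = {
    "float16": "torch.float16",
    "bfloat16": "torch.bfloat16",
    "float32": "torch.float32",
}

def resolve_precision(precision):
    """Map a precision string to (mode, dtype name); int4 selects GPTQ."""
    if precision == "int4":
        return ("gptq", None)  # quantized weights, no plain dtype
    if precision not in _PRECISION_DTYPES:
        raise ValueError(f"Unsupported NATIVE_LLM_PRECISION: {precision!r}")
    return ("native", _PRECISION_DTYPES[precision])
```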
- **CUDA not found during installation**
  - Ensure CUDA 12.8 is installed from NVIDIA CUDA Downloads
  - If the installer can't auto-detect CUDA, set `CUDA_HOME` before running `install.bat`:

    ```powershell
    # PowerShell
    $env:CUDA_HOME = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8"
    ```

    ```bat
    # Command Prompt
    set CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8
    ```

  - Verify your CUDA installation path:

    ```powershell
    Get-ChildItem "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\"
    ```
- **Out of VRAM**
  - Use a smaller LLM: Qwen3-4B instead of Llama-3.1-8B
  - Close other GPU-using applications
  - The application automatically moves inactive models to the CPU
- **Slow LLM inference**
  - Ensure you are using Gradio 5.x (not 6.x)
  - Check that `requirements.txt` has `gradio==5.50.0`
- **Model download fails**
  - Set your HuggingFace token: `set HF_TOKEN=your_token`
  - Check your internet connection
- **TRELLIS import errors**
  - Ensure submodules are initialized: `git submodule update --init --recursive`
- **Installation Issues**
  - Run PowerShell as Administrator
  - Check that Python is in your system PATH
  - Verify the Visual Studio Build Tools installation
- Application logs: console output
- Verbose logging: set `VERBOSE = True` in `config.py`
- TRELLIS - Microsoft's 3D generation model
- Qwen3 - Alibaba's LLM
- SANA - NVIDIA's image generation model
- Gradio - Web interface framework
Apache 2.0 - See LICENSE for details.