Merged

24 commits
- b9e1a2e  Add noisy circuit dataset for BP decoding demonstration (ChanceSiyuan, Jan 17, 2026)
- d6573a4  Refactor to proper Python package structure (ChanceSiyuan, Jan 17, 2026)
- 133c617  Add Makefile and uv support for automated workflow (ChanceSiyuan, Jan 18, 2026)
- 504d0ee  Add GitHub Actions CI/CD workflow for automated testing (ChanceSiyuan, Jan 18, 2026)
- 85aedc0  Add test coverage reporting and README badges (ChanceSiyuan, Jan 18, 2026)
- 4b8961f  Fix CI: allow uv cache without lock file (ChanceSiyuan, Jan 18, 2026)
- fdcf068  Fix CI: disable uv caching (ChanceSiyuan, Jan 18, 2026)
- 86dad3b  Remove PNG visualization files from dataset (ChanceSiyuan, Jan 18, 2026)
- e258fd3  Add syndrome database generation (Issue #5) (ChanceSiyuan, Jan 18, 2026)
- 3230244  Add detector error model generation (Issue #4) (ChanceSiyuan, Jan 18, 2026)
- 9bddaae  Fix CI: accept bool dtype in syndrome tests (ChanceSiyuan, Jan 18, 2026)
- 369de2b  Add comprehensive syndrome dataset documentation (ChanceSiyuan, Jan 18, 2026)
- 7472fe7  Add minimum working example and pipeline illustration (ChanceSiyuan, Jan 18, 2026)
- 57dd24f  Add getting started guide and demo dataset generator (ChanceSiyuan, Jan 19, 2026)
- c2bdf21  Organize datasets into subdirectories and complete Issues #4 and #5 (ChanceSiyuan, Jan 19, 2026)
- bf9d31f  Add UAI format support for probabilistic inference (Issue #4) (ChanceSiyuan, Jan 19, 2026)
- 9dbf70f  Consolidate documentation into unified getting started guide (ChanceSiyuan, Jan 19, 2026)
- bc5d57f  Organize UAI files into separate datasets/uais/ directory (ChanceSiyuan, Jan 19, 2026)
- b561964  Organize demonstration code into examples/ directory (ChanceSiyuan, Jan 20, 2026)
- cedee04  Update settings.local.json to expand allowed Bash commands and modify… (ChanceSiyuan, Jan 20, 2026)
- ad71d08  add a notebook (ChanceSiyuan, Jan 20, 2026)
- 1458f69  Merge branch 'main' into feat/add-noisy-circuits-dataset (GiggleLiu, Jan 20, 2026)
- d1dc69f  Organize scripts into dedicated scripts/ directory (GiggleLiu, Jan 20, 2026)
- 8aa716d  Set up MkDocs documentation with GitHub Pages deployment (GiggleLiu, Jan 20, 2026)
28 changes: 28 additions & 0 deletions .github/workflows/docs.yml
@@ -0,0 +1,28 @@
name: Deploy Documentation

on:
  push:
    branches:
      - main
  workflow_dispatch:

permissions:
  contents: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: |
          pip install mkdocs-material mkdocstrings[python] pymdown-extensions

      - name: Deploy to GitHub Pages
        run: mkdocs gh-deploy --force
2 changes: 1 addition & 1 deletion .gitignore
@@ -38,4 +38,4 @@ uv.lock
 *.blg
 *.fdb_latexmk
 *.synctex.gz
-note/belief_propagation_qec_plan.pdf
+note/belief_propagation_qec_plan.pdf.claude/settings.local.json
32 changes: 25 additions & 7 deletions Makefile
@@ -1,13 +1,17 @@
-.PHONY: help install setup test test-cov generate-dataset clean
+.PHONY: help install setup test test-cov generate-dataset generate-dem generate-syndromes docs docs-serve clean
 
 help:
 	@echo "Available targets:"
-	@echo "  install          - Install uv package manager"
-	@echo "  setup            - Set up development environment with uv"
-	@echo "  generate-dataset - Generate noisy circuit dataset"
-	@echo "  test             - Run tests"
-	@echo "  test-cov         - Run tests with coverage report"
-	@echo "  clean            - Remove generated files and caches"
+	@echo "  install             - Install uv package manager"
+	@echo "  setup               - Set up development environment with uv"
+	@echo "  generate-dataset    - Generate noisy circuit dataset"
+	@echo "  generate-dem        - Generate detector error models"
+	@echo "  generate-syndromes  - Generate syndrome database (1000 shots)"
+	@echo "  test                - Run tests"
+	@echo "  test-cov            - Run tests with coverage report"
+	@echo "  docs                - Build documentation"
+	@echo "  docs-serve          - Serve documentation locally"
+	@echo "  clean               - Remove generated files and caches"
 
 install:
 	@command -v uv >/dev/null 2>&1 || { \
@@ -21,12 +25,26 @@ setup: install
 generate-dataset:
 	uv run generate-noisy-circuits --distance 3 --p 0.01 --rounds 3 5 7 --task z --output datasets/noisy_circuits
 
+generate-dem:
+	uv run generate-noisy-circuits --distance 3 --p 0.01 --rounds 3 5 7 --task z --output datasets/noisy_circuits --generate-dem
+
+generate-syndromes:
+	uv run generate-noisy-circuits --distance 3 --p 0.01 --rounds 3 5 7 --task z --output datasets/noisy_circuits --generate-syndromes 1000
+
 test:
 	uv run pytest
 
 test-cov:
 	uv run pytest --cov=bpdecoderplus --cov-report=html --cov-report=term
 
+docs:
+	pip install mkdocs-material mkdocstrings[python] pymdown-extensions
+	mkdocs build
+
+docs-serve:
+	pip install mkdocs-material mkdocstrings[python] pymdown-extensions
+	mkdocs serve
+
 clean:
 	rm -rf .pytest_cache
 	rm -rf __pycache__
222 changes: 222 additions & 0 deletions datasets/README.md
@@ -0,0 +1,222 @@
# Noisy Circuit Dataset (Surface Code, d=3)

Circuit-level surface-code memory experiments generated with Stim for **Belief Propagation (BP) decoding** demonstrations.

## Dataset Organization

The dataset is organized into subdirectories by file type:

```
datasets/
├── circuits/ # Noisy quantum circuits (.stim)
├── dems/ # Detector error models (.dem)
├── uais/ # UAI format for probabilistic inference (.uai)
└── syndromes/ # Syndrome databases (.npz)
```
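
Each artifact loads with standard tooling. A minimal sketch: the `.dem` and `.npz` file names below are assumed to follow the circuit naming convention, and the `.npz` key names should be inspected via `data.files` rather than guessed.

```python
import numpy as np
import stim

# Circuits and detector error models load directly with Stim
circuit = stim.Circuit.from_file("datasets/circuits/sc_d3_r3_p0010_z.stim")
dem = stim.DetectorErrorModel.from_file("datasets/dems/sc_d3_r3_p0010_z.dem")

# Syndrome databases are numpy archives; list the stored arrays first
data = np.load("datasets/syndromes/sc_d3_r3_p0010_z.npz")
print(data.files)
```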

## Overview

| Parameter | Value |
|-----------|-------|
| Code | Rotated surface code |
| Distance | d = 3 |
| Noise model | i.i.d. depolarizing |
| Error rate | p = 0.01 |
| Task | Z-memory experiment |
| Rounds | 3, 5, 7 |

### Noise Application Points
- Clifford gates (`after_clifford_depolarization`)
- Data qubits between rounds (`before_round_data_depolarization`)
- Resets (`after_reset_flip_probability`)
- Measurements (`before_measure_flip_probability`)
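
These are Stim's circuit-generator parameters, which suggests the circuits were produced with `stim.Circuit.generated`. A sketch that mirrors the dataset parameters (the package CLI is the authoritative generator):

```python
import stim

p = 0.01
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",  # Z-memory task
    distance=3,
    rounds=3,
    after_clifford_depolarization=p,
    before_round_data_depolarization=p,
    after_reset_flip_probability=p,
    before_measure_flip_probability=p,
)
circuit.to_file("sc_d3_r3_p0010_z.stim")
```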

## Files

| File | Description |
|------|-------------|
| `sc_d3_r3_p0010_z.stim` | 3 rounds, p=0.01, Z-memory |
| `sc_d3_r5_p0010_z.stim` | 5 rounds, p=0.01, Z-memory |
| `sc_d3_r7_p0010_z.stim` | 7 rounds, p=0.01, Z-memory |

## Using This Dataset for BP Decoding

### Step 1: Load Circuit and Extract Detector Error Model (DEM)

The detector error model (DEM) is the key input for BP decoding: it lists each independent error mechanism, its probability, and the detectors (and logical observables) it flips.

```python
import stim
import numpy as np

# Load circuit
circuit = stim.Circuit.from_file("datasets/circuits/sc_d3_r3_p0010_z.stim")

# Extract DEM - this is what BP needs
dem = circuit.detector_error_model(decompose_errors=True)
print(f"Detectors: {dem.num_detectors}") # 24
print(f"Error mechanisms: {dem.num_errors}") # 286
print(f"Observables: {dem.num_observables}") # 1
```

### Step 2: Build Parity Check Matrix H

BP operates on the parity check matrix where `H[i,j] = 1` means error `j` triggers detector `i`.

```python
def build_parity_check_matrix(dem):
"""Convert DEM to parity check matrix H and prior probabilities."""
errors = []
for inst in dem.flattened():
if inst.type == 'error':
prob = inst.args_copy()[0]
dets = [t.val for t in inst.targets_copy() if t.is_relative_detector_id()]
obs = [t.val for t in inst.targets_copy() if t.is_logical_observable_id()]
errors.append({'prob': prob, 'detectors': dets, 'observables': obs})

n_detectors = dem.num_detectors
n_errors = len(errors)

# Parity check matrix
H = np.zeros((n_detectors, n_errors), dtype=np.uint8)
# Prior error probabilities (for BP initialization)
priors = np.zeros(n_errors)
# Which errors flip the logical observable
obs_flip = np.zeros(n_errors, dtype=np.uint8)

for j, e in enumerate(errors):
priors[j] = e['prob']
for d in e['detectors']:
H[d, j] = 1
if e['observables']:
obs_flip[j] = 1

return H, priors, obs_flip

H, priors, obs_flip = build_parity_check_matrix(dem)
print(f"H shape: {H.shape}") # (24, 286)
```
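
As a sanity check, error patterns sampled from the DEM must satisfy the syndrome equation `s = H e (mod 2)`. A small sketch, assuming a Stim version whose DEM sampler supports `return_errors=True`:

```python
# Sample a few error patterns together with the detectors they trigger
det, obs, err = dem.compile_sampler().sample(shots=8, return_errors=True)

# Every sampled shot must reproduce its own syndrome through H
reproduced = (err.astype(np.uint8) @ H.T) % 2
assert np.array_equal(reproduced, det.astype(np.uint8))
```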

### Step 3: Sample Syndromes (Detection Events)

```python
# Compile sampler
sampler = circuit.compile_detector_sampler()

# Sample detection events + observable flip
n_shots = 1000
samples = sampler.sample(n_shots, append_observables=True)

# Split into syndrome and observable (stim returns numpy bool arrays)
syndromes = samples[:, :-1] # shape: (n_shots, n_detectors)
actual_obs_flips = samples[:, -1] # shape: (n_shots,)

print(f"Syndrome shape: {syndromes.shape}")
print(f"Example syndrome: {syndromes[0]}")
```
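
To persist samples in the same `.npz` layout as the shipped syndrome databases, something like the following works. The key names here are illustrative, not necessarily those used by the package's generator:

```python
# Hypothetical key names; inspect an existing archive to match the generator
np.savez_compressed(
    "datasets/syndromes/sc_d3_r3_p0010_z.npz",
    syndromes=syndromes,
    observable_flips=actual_obs_flips,
)
```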

### Step 4: BP Decoding (Min-Sum)

A minimal, runnable min-sum BP decoder on dense numpy arrays. It is meant for demonstration: production decoders use sparse message passing and usually add post-processing such as OSD.

```python
def bp_decode(H, syndrome, priors, max_iter=50, damping=0.5):
    """
    Belief Propagation decoder (min-sum variant).

    Args:
        H: Parity check matrix (n_detectors, n_errors)
        syndrome: Detection events (n_detectors,)
        priors: Prior error probabilities (n_errors,)
        max_iter: Maximum BP iterations
        damping: Message damping factor in [0, 1); higher = more smoothing

    Returns:
        estimated_errors: Hard-decision error pattern (n_errors,)
        soft_output: Log-likelihood ratios (n_errors,); negative = error likely
    """
    n_checks, n_vars = H.shape
    mask = H.astype(bool)
    syndrome = np.asarray(syndrome, dtype=np.uint8)

    # Initialize LLRs from priors: LLR = log((1-p)/p), positive = "no error"
    llr_prior = np.log((1 - priors) / priors)

    # Messages live on the edges of the Tanner graph (stored densely here)
    v2c = np.where(mask, llr_prior[None, :], 0.0)  # variable -> check
    c2v = np.zeros_like(v2c)                       # check -> variable
    syndrome_sign = 1.0 - 2.0 * syndrome           # +1 if s=0, -1 if s=1

    for _ in range(max_iter):
        # Check update (min-sum): leave-one-out sign product and min magnitude
        signs = np.where(v2c >= 0, 1.0, -1.0)
        total_sign = np.where(mask, signs, 1.0).prod(axis=1, keepdims=True)
        mags = np.where(mask, np.abs(v2c), np.inf)
        min1 = mags.min(axis=1, keepdims=True)
        min2 = np.partition(mags, 1, axis=1)[:, 1:2]
        loo_min = np.where(mags == min1, min2, min1)
        new_c2v = np.where(
            mask, syndrome_sign[:, None] * total_sign * signs * loo_min, 0.0)
        c2v = damping * c2v + (1 - damping) * new_c2v

        # Variable update and soft output
        soft_output = llr_prior + c2v.sum(axis=0)
        v2c = np.where(mask, soft_output[None, :] - c2v, 0.0)

        # Hard decision; stop early once the syndrome is satisfied
        estimated_errors = (soft_output < 0).astype(np.uint8)
        if np.array_equal((H @ estimated_errors) % 2, syndrome):
            break

    return estimated_errors, soft_output

# Decode each syndrome
for i in range(n_shots):
    estimated_errors, _ = bp_decode(H, syndromes[i], priors)

    # Predict the logical observable flip implied by the estimated errors
    predicted_obs_flip = np.dot(estimated_errors, obs_flip) % 2

    # Check if decoding succeeded
    success = (predicted_obs_flip == actual_obs_flips[i])
```

### Step 5: Evaluate Decoder Performance

After decoding, compare predicted vs actual observable flips to measure logical error rate.

```python
def evaluate_decoder(decoder_fn, circuit, n_shots=10000):
"""Evaluate decoder logical error rate."""
dem = circuit.detector_error_model(decompose_errors=True)
H, priors, obs_flip = build_parity_check_matrix(dem)

sampler = circuit.compile_detector_sampler()
samples = sampler.sample(n_shots, append_observables=True)
syndromes = samples[:, :-1]
actual_obs = samples[:, -1]

errors = 0
for i in range(n_shots):
est_errors, _ = decoder_fn(H, syndromes[i], priors)
pred_obs = np.dot(est_errors, obs_flip) % 2
if pred_obs != actual_obs[i]:
errors += 1

return errors / n_shots

# logical_error_rate = evaluate_decoder(bp_decode, circuit)
```
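
Putting the pieces together with the min-sum sketch from Step 4. Expect a modest result: BP alone, without post-processing such as OSD, is typically weaker than matching-based decoders on surface codes.

```python
circuit = stim.Circuit.from_file("datasets/circuits/sc_d3_r3_p0010_z.stim")
logical_error_rate = evaluate_decoder(bp_decode, circuit, n_shots=2000)
print(f"Logical error rate: {logical_error_rate:.4f}")
```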

## Regenerating the Dataset

```bash
# Install the package with uv
uv sync

# Generate circuits using the CLI
python -m bpdecoderplus.cli \
--distance 3 \
--p 0.01 \
--rounds 3 5 7 \
--task z \
--generate-dem \
--generate-uai \
--generate-syndromes 10000
```

## Extending the Dataset

```bash
# Different error rates
python -m bpdecoderplus.cli --p 0.005 --rounds 3 5 7 --generate-dem --generate-uai

# Different distances
python -m bpdecoderplus.cli --distance 5 --rounds 5 7 9 --generate-dem --generate-uai

# X-memory experiment
python -m bpdecoderplus.cli --task x --rounds 3 5 7 --generate-dem --generate-uai
```

## References

- [Stim Documentation](https://github.com/quantumlib/Stim)
- [BP+OSD Decoder Paper](https://arxiv.org/abs/2005.07016)
- [Surface Code Decoding Review](https://quantum-journal.org/papers/q-2024-10-10-1498/)