中文 | English
A high-performance Base94 encoding/decoding Rust implementation based on PyO3, 14-40 times faster than the native Python version.
- ⚡ Blazing-Fast Processing: Core algorithms optimized with Rust
- 🔄 Seamless Compatibility: Fully compatible with the pure Python version
- 🛡️ Memory Safety: Backed by Rust's language-level memory safety guarantees
- 📦 Simple API: Two intuitive functions - `b94encode` and `b94decode`
- 🪁 Multiple Options: Includes both `base72` and `base94` encoding methods internally
- Rust toolchain (1.74+)
- Python 3.8+
- maturin (`pip install maturin`)
```bash
# Install stable version
pip install 'base94-rs'

# Install development version
pip install git+https://github.com/hibays/base94.git
```

```bash
# Clone the repository
git clone https://github.com/hibays/base94.git
cd base94

# Compile and install
pip install .
```

```python
import base94

# Encoding example
data = b"Hello Base94!"
encoded = base94.b94encode(data)
print(f"Encoded: {encoded}")  # Output: b'4Tk7J#qZcjYw'

# Decoding example
decoded = base94.b94decode(encoded)
print(f"Decoded: {decoded}")  # Output: b'Hello Base94!'
```

| Data Size | Implementation | Encoding Time (s) | Decoding Time (s) | Encoding Speed | Decoding Speed |
|---|---|---|---|---|---|
| 10KB | Python Native | 0.0088 | 0.0067 | 1.11 MB/s | 1.45 MB/s |
| 10KB | Rust Accelerated | 0.0003 | 0.0001 | 31.75 MB/s | 70.26 MB/s |
| 100KB | Python Native | 0.0523 | 0.0704 | 1.87 MB/s | 1.39 MB/s |
| 100KB | Rust Accelerated | 0.0035 | 0.0014 | 28.13 MB/s | 72.17 MB/s |
| 1MB | Python Native | 0.5254 | 0.7434 | 1.90 MB/s | 1.35 MB/s |
| 1MB | Rust Accelerated | 0.0388 | 0.0220 | 25.79 MB/s | 45.54 MB/s |
| 10MB | Python Native | 5.5060 | 7.6613 | 1.82 MB/s | 1.31 MB/s |
| 10MB | Rust Accelerated | 0.3819 | 0.2030 | 26.19 MB/s | 49.27 MB/s |
Test Environment: i7-13620H @ 2.4GHz, 32GB DDR5 RAM
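These figures can be sanity-checked with a short script along the following lines (a rough sketch only; the data sizes and timing loop are illustrative, and this is not the repository's `python.benchmarks` harness):

```python
import os
import time

import base94

def bench(size_bytes: int) -> None:
    """Time one encode/decode round trip over random data of the given size."""
    data = os.urandom(size_bytes)

    t0 = time.perf_counter()
    encoded = base94.b94encode(data)
    t1 = time.perf_counter()
    decoded = base94.b94decode(encoded)
    t2 = time.perf_counter()

    assert decoded == data
    mb = size_bytes / (1024 * 1024)
    print(f"{size_bytes:>9} B  encode {t1 - t0:.4f}s ({mb / (t1 - t0):.2f} MB/s)  "
          f"decode {t2 - t1:.4f}s ({mb / (t2 - t1):.2f} MB/s)")

for size in (10 * 1024, 100 * 1024, 1024 * 1024, 10 * 1024 * 1024):
    bench(size)
```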
- Precomputed Lookup Tables: `lazy_static` for faster character mapping (see the sketch below)
- Block-Level Parallelism: Lock-free processing of encoding blocks
- Zero Heap Allocation: Fully stack-based memory operations
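In pure Python the lookup-table idea looks roughly like this (a hedged sketch: the 94-character alphabet `'!'`–`'~'` and the table names are assumptions for illustration; the crate builds its tables once via `lazy_static`):

```python
# Illustration only: precomputed mapping tables built once at import time,
# the Python analogue of a lazy_static table in the Rust crate.
# The alphabet below is an assumption, not taken from the crate's source.
ALPHABET = bytes(range(33, 127))   # 94 printable ASCII characters '!'..'~'

ENC_TABLE = ALPHABET               # digit value (0..93) -> output character
DEC_TABLE = [-1] * 256             # input character -> digit value, -1 = invalid
for value, ch in enumerate(ALPHABET):
    DEC_TABLE[ch] = value

def char_value(ch: int) -> int:
    """O(1) reverse lookup used during decoding."""
    value = DEC_TABLE[ch]
    if value < 0:
        raise ValueError(f"invalid Base94 character: {chr(ch)!r}")
    return value
```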
```mermaid
graph TD
%% Start of encoding workflow
A[Input Byte Stream] --> B{Padding Handling}
B -->|Zero Padding| C[Block Processing: 9 bytes/chunk]
C --> D[Convert to 128-bit Integer]
D --> E[Base94 Decomposition]
E --> F[Lookup Table Encoding]
F --> G[Output Base94 String]
%% Python Binding Section
H[Python calls b94encode] --> I{Auto-Implementation Selection}
I -->|Python Native| J[py_b94encode]
I -->|Rust Accelerated| K[rs_b94encode]
%% Encoding algorithm & optimization details
L[Precomputed Lookup Tables] --> M[SIMD Memory Layout]
M --> N[Block-Level Parallelism]
N --> O[Zero Heap Allocation]
O --> G
%% Final encoding result
G --> P[Final Encoded Result]
%% Start of decoding workflow
Q[Input Base94 String] --> R{Padding Handling}
R -->|Zero Padding| S[Block Processing: 11 bytes/chunk]
S --> T[Base94 Character Mapping]
T --> U[Combine into 9 Bytes]
U --> V[Output Decoded Byte Stream]
%% Python Binding Section
W[Python calls b94decode] --> X{Auto-Implementation Selection}
X -->|Python Native| Y[py_b94decode]
X -->|Rust Accelerated| Z[rs_b94decode]
%% Decoding algorithm & optimization details
AA[Precomputed Lookup Tables] --> BB[SIMD Memory Layout]
BB --> CC[Block-Level Parallelism]
CC --> DD[Zero Heap Allocation]
DD --> V
%% Final decoding result
V --> EE[Final Decoded Result]
```
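To make the block step concrete, the 9-bytes-in / 11-characters-out chunking above can be sketched in pure Python as follows (assumptions, not the library's actual code: the alphabet is the 94 printable ASCII characters `'!'`–`'~'`, and handling of the final partial block is omitted). Nine bytes form a 72-bit integer, which fits in a `u128` and decomposes into exactly 11 base-94 digits because 94^11 > 2^72:

```python
# Sketch of the 9-byte -> 11-character block step from the diagram.
# Assumed alphabet and no padding handling; not the library's actual code.
ALPHABET = bytes(range(33, 127))  # 94 printable ASCII characters '!'..'~'

def encode_block(block: bytes) -> bytes:
    """Encode one full 9-byte block into 11 Base94 characters."""
    assert len(block) == 9
    n = int.from_bytes(block, "big")       # 72-bit integer (fits in a u128)
    digits = bytearray(11)
    for i in range(10, -1, -1):            # base-94 decomposition, most significant first
        n, d = divmod(n, 94)
        digits[i] = ALPHABET[d]
    return bytes(digits)

def decode_block(chunk: bytes) -> bytes:
    """Invert encode_block: 11 Base94 characters back into 9 bytes."""
    assert len(chunk) == 11
    n = 0
    for ch in chunk:
        n = n * 94 + ALPHABET.index(ch)    # a table lookup in the real implementation
    return n.to_bytes(9, "big")

assert decode_block(encode_block(b"exactly9B")) == b"exactly9B"
```

The "Auto-Implementation Selection" node is typically just an import-time fallback; a hypothetical sketch (the module and function names below are invented for illustration and do not reflect the package's real layout):

```python
# Hypothetical fallback wiring; module/function names are illustrative only.
try:
    from base94._rust import rs_b94encode as b94encode   # compiled PyO3 extension
except ImportError:
    from base94.pure import py_b94encode as b94encode    # pure-Python fallback
```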
```bash
# Run unit tests
python -m pytest

# Run performance benchmarks
python -m python.benchmarks
```

It is recommended to use `uv` for virtual environment management.
```bash
# Create a virtual environment
uv venv

# Install dependencies
uv pip install maturin twine

# Install for local testing
maturin develop --release

# Package and publish
rm dist/* && uv build && twine upload dist/*
```

Pull Requests are welcome! Recommended workflow:
- Fork the repository
- Create a feature branch (`git checkout -b feature`)
- Commit your changes (`git commit -am 'Add feature'`)
- Push to the branch (`git push origin feature`)
- Create a Pull Request