Testing
Testing guidelines and practices for LPM.
Unit tests are co-located with the code they test:
```rust
// src/package/manifest.rs
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_manifest() {
        // Test code
    }
}
```

Run unit tests:

```bash
cargo test
```

Integration tests are in `tests/`:
- `integration_tests.rs`: CLI command tests
- `security.rs`: Security-focused tests
Run integration tests:
```bash
cargo test --test integration_tests
cargo test --test security
```

Performance benchmarks are in `benches/`:

- `performance_benchmarks.rs`: Critical path benchmarks
Run benchmarks:
```bash
cargo bench
```

Example unit test:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_version_parsing() {
        let v = Version::parse("1.2.3").unwrap();
        assert_eq!(v.major, 1);
        assert_eq!(v.minor, 2);
        assert_eq!(v.patch, 3);
    }

    #[test]
    #[should_panic(expected = "Invalid version")]
    fn test_invalid_version() {
        Version::parse("invalid").unwrap();
    }
}
```

Example integration test:

```rust
// tests/integration_tests.rs
use std::process::Command;

#[test]
fn test_lpm_init() {
    let temp_dir = tempfile::tempdir().unwrap();
    let output = Command::new("cargo")
        .args(&["run", "--", "init"])
        .current_dir(temp_dir.path())
        .output()
        .unwrap();

    assert!(output.status.success());
    assert!(temp_dir.path().join("package.yaml").exists());
}
```

What to test (an error-case sketch follows this list):

- Happy paths: Normal operation
- Error cases: Invalid input, missing files, network errors
- Edge cases: Empty inputs, boundary values
- Security: Checksum verification, sandboxing
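A minimal error-case sketch, assuming `Version::parse` returns a `Result` as the examples above suggest; the empty-input case is illustrative, not LPM's exact API:

```rust
#[test]
fn test_parse_rejects_empty_input() {
    // Edge case: an empty string should produce an error, not a panic.
    // `Version` is the crate's version type, as in the examples above.
    let result = Version::parse("");
    assert!(result.is_err());
}
```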
What not to test:

- Third-party library functionality
- Standard library behavior
- Obvious getters/setters (unless complex)
Common test commands:

```bash
cargo test                # Run all tests
cargo test test_name      # Run a single test by name
cargo test -- --nocapture # Show test output
cargo test install        # Run tests with "install" in name
cargo test --lib          # Only library tests
```

Use the tempfile crate for temporary test data:
```rust
use tempfile::TempDir;

let temp_dir = TempDir::new().unwrap();
let file_path = temp_dir.path().join("test.yaml");
```

For external dependencies (HTTP, file system), consider the following; a dependency-injection sketch follows this list:
- Dependency injection
- Test doubles
- Feature flags for test mode
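A minimal dependency-injection sketch with a test double; the `Fetcher` trait and `download_manifest` function are hypothetical names for illustration, not LPM's actual API:

```rust
// Hypothetical abstraction over the HTTP layer so tests can inject a double.
trait Fetcher {
    fn fetch(&self, url: &str) -> Result<Vec<u8>, String>;
}

// Test double: returns canned bytes instead of hitting the network.
struct FakeFetcher {
    response: Vec<u8>,
}

impl Fetcher for FakeFetcher {
    fn fetch(&self, _url: &str) -> Result<Vec<u8>, String> {
        Ok(self.response.clone())
    }
}

// Code under test accepts the dependency as a trait object.
fn download_manifest(fetcher: &dyn Fetcher, url: &str) -> Result<String, String> {
    let bytes = fetcher.fetch(url)?;
    String::from_utf8(bytes).map_err(|e| e.to_string())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_download_manifest_uses_double() {
        let fetcher = FakeFetcher { response: b"name: demo".to_vec() };
        let manifest = download_manifest(&fetcher, "https://example.invalid/pkg.yaml").unwrap();
        assert_eq!(manifest, "name: demo");
    }
}
```

This keeps the network out of unit tests entirely; production code can supply a real HTTP-backed implementation of the same trait.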
Tests run automatically on:
- Every push
- Every pull request
- Multiple platforms (Linux, macOS, Windows); see the platform-gated test sketch below
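Where behavior differs per platform, individual tests can be gated with `cfg` attributes. A minimal sketch using the standard library's path separator (the example is illustrative):

```rust
#[cfg(test)]
mod tests {
    // Compiled and run only on Unix-like runners (Linux, macOS).
    #[test]
    #[cfg(unix)]
    fn test_unix_path_separator() {
        assert_eq!(std::path::MAIN_SEPARATOR, '/');
    }

    // Compiled and run only on the Windows runner.
    #[test]
    #[cfg(windows)]
    fn test_windows_path_separator() {
        assert_eq!(std::path::MAIN_SEPARATOR, '\\');
    }
}
```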
Run the same checks locally before pushing:

```bash
# Format check
cargo fmt --check

# Clippy
cargo clippy -- -D warnings

# Tests
cargo test

# Build
cargo build --release
```

Add benchmarks for critical paths:
```rust
// benches/performance_benchmarks.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// `Version` comes from the crate under test.
fn benchmark_version_parse(c: &mut Criterion) {
    c.bench_function("parse_version", |b| {
        b.iter(|| Version::parse(black_box("1.2.3")))
    });
}

// Register the benchmark with Criterion's harness.
criterion_group!(benches, benchmark_version_parse);
criterion_main!(benches);
```

Run:

```bash
cargo bench
```

Best practices (an Arrange-Act-Assert sketch follows this list):

- Test names: Clear, descriptive names
- One assertion: One concept per test
- Arrange-Act-Assert: Structure tests clearly
- Test data: Use realistic test data
- Cleanup: Clean up test artifacts
- Isolation: Tests should not depend on each other
- Fast: Keep tests fast (use mocks for slow operations)
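A minimal Arrange-Act-Assert sketch; the `add` function is a hypothetical stand-in for code under test:

```rust
fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[test]
fn test_add_two_numbers() {
    // Arrange: set up inputs.
    let (a, b) = (2, 3);

    // Act: exercise the code under test.
    let sum = add(a, b);

    // Assert: one concept per test.
    assert_eq!(sum, 5);
}
```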
To inspect values in a failing test, use the `dbg!` macro:

```rust
#[test]
fn test_something() {
    let value = compute();
    dbg!(&value); // Prints to stderr
    assert_eq!(value, expected);
}
```

Using a debugger:

- Set breakpoint in test
- Run test in debugger
- Step through code
Consider using cargo watch for continuous testing:
```bash
cargo install cargo-watch
cargo watch -x test
```

This runs tests automatically on file changes.