Validate your pandas and Polars DataFrames at runtime with simple Python decorators. Daffy catches missing columns, wrong data types, and invalid values before they cause downstream errors in your data pipeline.
Also supports Modin and PyArrow DataFrames.
- ✅ Column & dtype validation — lightweight, minimal overhead
- ✅ Value constraints — nullability, uniqueness, range checks
- ✅ Row validation with Pydantic — when you need deeper checks
- ✅ Works with pandas, Polars, Modin, PyArrow — no lock-in
```bash
pip install daffy
```

or with conda:
```bash
conda install -c conda-forge daffy
```

Works with whatever DataFrame library you already have installed. Python 3.9–3.14.
```python
from daffy import df_in, df_out

@df_in(columns=["price", "bedrooms", "location"])
@df_out(columns=["price_per_room", "price_category"])
def analyze_housing(houses_df):
    # Transform raw housing data into price analysis
    return analyzed_df
```

If a column is missing, has wrong dtype, or violates a constraint — Daffy fails fast with a clear error message at the function boundary.
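For example, calling with a frame that lacks the `location` column (a sketch assuming pandas; the exact exception type and message depend on your Daffy version):

```python
import pandas as pd

houses = pd.DataFrame({"price": [450_000], "bedrooms": [3]})  # no "location"

try:
    analyze_housing(houses)
except Exception as err:
    print(f"{type(err).__name__}: {err}")  # Daffy reports the missing column
```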
Most DataFrame validation tools are schema-first (define schemas separately) or pipeline-wide (run suites over datasets). Daffy is decorator-first: validate inputs and outputs where transformations happen.
| Design goal | What it means |
|---|---|
| Non-intrusive | Just add decorators — no refactoring, no custom DataFrame types, no schema files |
| Easy to adopt | Add in 30 seconds, remove just as fast if needed |
| In-process | No external stores, orchestrators, or infrastructure |
| Pay for what you use | Column validation is essentially free; opt into row validation when needed |
Basic column validation:

```python
from daffy import df_in, df_out

@df_in(columns=["Brand", "Price"])
@df_out(columns=["Brand", "Price", "Discount"])
def apply_discount(df):
    df = df.copy()
    df["Discount"] = df["Price"] * 0.1
    return df
```
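A quick call for illustration (hypothetical data, assuming pandas):

```python
import pandas as pd

cars = pd.DataFrame({"Brand": ["Toyota", "Ford"], "Price": [25_000, 32_000]})
discounted = apply_discount(cars)  # both column contracts are satisfied
```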
@df_in(columns=["id", "r/feature_\\d+/"])
def process_features(df):
    return df
```
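For illustration, both of these frames satisfy the pattern (hypothetical data, assuming pandas):

```python
import pandas as pd

# Each feature_<n> column matches the r/feature_\d+/ pattern
process_features(pd.DataFrame({"id": [1], "feature_1": [0.5]}))
process_features(pd.DataFrame({"id": [1], "feature_1": [0.5], "feature_2": [0.9]}))
```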
Vectorized checks with zero row iteration overhead:

```python
@df_in(columns={
"price": {"checks": {"gt": 0, "lt": 10000}},
"status": {"checks": {"isin": ["active", "pending", "closed"]}},
"email": {"checks": {"str_regex": r"^[^@]+@[^@]+\.[^@]+$"}},
})
def process_orders(df):
    return df
```

Available checks: `gt`, `ge`, `lt`, `le`, `between`, `eq`, `ne`, `isin`, `notnull`, `str_regex`.
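For illustration, a frame that trips two of these checks (hypothetical data; the exact error type and message depend on the Daffy version):

```python
import pandas as pd

orders = pd.DataFrame({
    "price": [19.99, -5.00],           # -5.00 violates gt: 0
    "status": ["active", "archived"],  # "archived" is not in the isin set
    "email": ["a@example.com", "b@example.com"],
})
process_orders(orders)  # rejected at the function boundary
```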
Nullability and uniqueness constraints:

```python
@df_in(
columns=["user_id", "email", "age"],
nullable={"email": False}, # email cannot be null
unique=["user_id"], # user_id must be unique
)
def clean_users(df):
    return df
```
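A frame that violates both constraints, for illustration (hypothetical data):

```python
import pandas as pd

users = pd.DataFrame({
    "user_id": [1, 1],                 # duplicate user_id
    "email": ["a@example.com", None],  # null email
    "age": [34, 29],
})
clean_users(users)  # fails before the function body runs
```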
For complex, cross-field validation (requires `pydantic>=2.4.0`):

```python
from pydantic import BaseModel, Field
from daffy import df_in

class Product(BaseModel):
    name: str
    price: float = Field(gt=0)
    stock: int = Field(ge=0)

@df_in(row_validator=Product)
def process_inventory(df):
    return df
```
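The `Product` model above checks each field independently; for genuinely cross-field rules, a Pydantic `model_validator` can be used in the same way. A minimal sketch (the `Listing` model is hypothetical, not from the Daffy docs):

```python
from pydantic import BaseModel, Field, model_validator

from daffy import df_in


class Listing(BaseModel):
    price: float = Field(gt=0)
    sale_price: float = Field(gt=0)

    @model_validator(mode="after")
    def sale_must_undercut_price(self):
        # Cross-field rule: the discounted price must stay below the list price
        if self.sale_price >= self.price:
            raise ValueError("sale_price must be below price")
        return self


@df_in(row_validator=Listing)
def process_listings(df):
    return df
```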
How Daffy compares with schema-first and pipeline-wide alternatives:

| Use Case | Daffy | Pandera | Great Expectations |
|---|---|---|---|
| Function boundary guardrails | ✅ Primary focus | | ❌ Not designed for |
| Quick column/type checks | ✅ Lightweight | | |
| Complex statistical checks | | ✅ Extensive | ✅ Extensive |
| Pipeline/warehouse QA | ❌ Not designed for | | ✅ Primary focus |
| Multi-backend support | ✅ | ✅ | |
Configure Daffy project-wide via `pyproject.toml`:
```toml
[tool.daffy]
strict = true
```
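As a sketch of what project-wide strictness implies at call sites (assuming strict mode rejects columns that were not declared; confirm the exact semantics in the docs for your Daffy version):

```python
import pandas as pd

from daffy import df_in

@df_in(columns=["Brand", "Price"])
def summarize(df):
    return df.describe()

# With strict = true, a frame carrying an undeclared column ("Color")
# would be rejected here instead of being silently passed through.
summarize(pd.DataFrame({"Brand": ["Kia"], "Price": [21_000], "Color": ["red"]}))
```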
Full documentation available at daffy.readthedocs.io:

- Getting Started — quick introduction
- Usage Guide — comprehensive reference
- API Reference — decorator signatures
- Changelog — version history
Issues and pull requests welcome on GitHub.
Licensed under the MIT license.