I design decision architectures for human-AI systems—who decides what, when, and with what authority.
This work grows from 15+ years in service design, civic tech, and HCI research. Most system failures aren't interface problems—they're judgment routing problems. Wrong actor, wrong information, wrong authority structure.
Research areas:
- Preference architecture: Constraint design that reveals true intent (Stratified Preference Allocation)
- Trust infrastructure: Portable reputation and shared moderation (Glowrm)
- Agentic economics: The hidden costs of AI decision-making (Occupant Index)
- Disclosure systems: Making AI decisions legible for public accountability (Tardigrade, AI Statements)
- Delegation frameworks: When humans stay in the loop, when they don't (Judgment Routing)
The core insight: Traditional design assumes human decision-makers. AI doesn't just change the interface—it changes who decides. My work formalizes what designers used to do intuitively: structuring authority, information flow, and trust.
- Assistant Professor of Practice in Urban Technology, Taubman College, University of Michigan
- Board Member, State Capacity AI
- State Tennis Co-Chair, OACA (2023–)
- Founding Director, PDX Digital Corps (2025)
- Board President, AIGA Portland (2023–25)
- Co-Founder, Portland Design Month (2024)
Stratified Preference Allocation (SPA) — Matching architecture that replaces unlimited swipes with scarce, tiered signals. Forces honest preference revelation, reduces spam. Works across dating, hiring, networking—anywhere matching happens.
Reference implementations: HeyPBJ (dating) · Leafroll (professional networking)
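A minimal sketch of the scarcity mechanism (the tier names, budgets, and `SignalLedger` class are illustrative assumptions, not the HeyPBJ or Leafroll implementation):

```python
from dataclasses import dataclass, field

# Hypothetical tier budgets: scarcity is the mechanism, the numbers are illustrative.
TIER_BUDGETS = {"priority": 1, "standard": 3}

@dataclass
class SignalLedger:
    """Per-user ledger of scarce, tiered match signals for one allocation period."""
    remaining: dict = field(default_factory=lambda: dict(TIER_BUDGETS))
    sent: list = field(default_factory=list)

    def send(self, target_id: str, tier: str) -> bool:
        """Spend one signal of the given tier; fails once the budget is exhausted."""
        if self.remaining.get(tier, 0) <= 0:
            return False  # no budget left: the sender must prioritize, not spray
        self.remaining[tier] -= 1
        self.sent.append((target_id, tier))
        return True

ledger = SignalLedger()
assert ledger.send("user_42", "priority")      # spends the single priority signal
assert not ledger.send("user_77", "priority")  # rejected: scarcity forces a real choice
```

Because signals are scarce, sending one is costly, and a costly signal is a more honest one.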
Glowrm — Trust infrastructure for ATProto: portable reputation, shared moderation, SPA-based resource allocation
KizuKizu — Relational profiling around connection archetypes. How you connect, not just who you are.
Judgment Routing — Prototypes for delegated authority in agentic systems. When should a machine decide? When should a human?
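A minimal sketch of the routing question as code (the `route` function, its inputs, and every threshold are illustrative assumptions, not the prototypes' logic):

```python
from enum import Enum

class Decider(Enum):
    MACHINE = "machine"                # agent acts autonomously
    HUMAN_APPROVES = "human_approves"  # agent proposes, human confirms
    HUMAN = "human"                    # routed entirely to a person

def route(stakes: float, reversible: bool, model_confidence: float) -> Decider:
    """Route one decision to the right actor. Thresholds are placeholders."""
    if stakes > 0.8 or not reversible:
        return Decider.HUMAN            # high-stakes or irreversible: a person decides
    if model_confidence < 0.7:
        return Decider.HUMAN_APPROVES   # uncertain: machine drafts, human signs off
    return Decider.MACHINE              # low-stakes, reversible, confident: delegate

print(route(stakes=0.2, reversible=True, model_confidence=0.95))  # Decider.MACHINE
```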
Occupant Index — AI cost intelligence tracking the "Judgment Premium" in reasoning model pricing
Reasoning-class models (o1, DeepSeek-R1) have different economics from commodity inference. The Occupant Index tracks what aggregate "AI cost deflation" narratives miss: organizations building on AI judgment hold an unhedged position on pricing that isn't deflating the way they expect.
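Roughly, the "Judgment Premium" is the ratio of effective per-task cost between a reasoning-class model and commodity inference. A back-of-envelope sketch (all prices and token counts below are made-up placeholders, not Occupant Index data):

```python
def cost_per_task(input_tokens: int, output_tokens: int,
                  in_price: float, out_price: float) -> float:
    """Effective dollars per task at given per-million-token prices."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Illustrative numbers only; real prices and token counts vary by model and task.
commodity = cost_per_task(1_000, 500, in_price=0.15, out_price=0.60)
reasoning = cost_per_task(1_000, 8_000, in_price=3.00, out_price=12.00)  # long reasoning traces

judgment_premium = reasoning / commodity
print(f"{judgment_premium:.0f}x per task")  # 220x: the premium hides inside longer outputs
```

The premium compounds in two places at once: a higher per-token price and far more tokens per task, which is why per-token deflation can mask per-judgment inflation.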
Portland Digital Corps — Civic tech sprint. Six Oregon organizations, 100+ participants, shipped projects.
GitHub · Final Report
Design For The Public 2024 — Two-day conference on public interest design and service delivery
Tardigrade — Open disclosure patterns for AI systems: badges, contestability, human oversight indicators
AI Statements — Standardized AI usage disclosures
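For flavor, a standardized disclosure record might carry fields like these (the `AIStatement` shape and every field name are hypothetical, not the Tardigrade or AI Statements schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIStatement:
    """Illustrative shape for a standardized AI usage disclosure."""
    system_name: str
    decision_scope: str   # what the system decides or influences
    human_oversight: str  # e.g. "caseworker confirms every final determination"
    contest_url: str      # where an affected person can contest an outcome

statement = AIStatement(
    system_name="benefits-triage",
    decision_scope="prioritizes application review order; never denies",
    human_oversight="caseworker confirms every final determination",
    contest_url="https://example.org/appeal",
)
```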
Consequence Design — Framework for post-deployment harms, institutional erosion, and the DIRE methodology
Oregon HS Tennis Rankings — Power Index rankings, playoff simulator, league analysis
A complete analytical system: adjusted win percentage (APR), strength of schedule (SOS), flight-weighted scoring (FWS). Fair, transparent team rankings. Built because ranking systems are decision architectures—they encode what we value and who gets opportunities.
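One way to read "complete analytical system": the Power Index is a weighted composite of the three components. A sketch under assumed weights and normalization (not the published formula):

```python
def power_index(apr: float, sos: float, fws: float,
                w_apr: float = 0.5, w_sos: float = 0.3, w_fws: float = 0.2) -> float:
    """Weighted composite of the three ranking components.

    Weights are illustrative; the published Oregon rankings may combine
    the components differently.
    """
    return w_apr * apr + w_sos * sos + w_fws * fws

# Components assumed normalized to [0, 1] for comparability.
print(round(power_index(apr=0.82, sos=0.64, fws=0.71), 3))  # 0.744
```

The weights are where the values live: shifting weight between APR and SOS changes which teams the system rewards, which is the "decision architecture" point in miniature.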
Bluesky · Literal · Record Club · Last.fm · Mastodon · Mixcloud