feat(api.iam): workspace service #204

Conversation
…lembic
- Add SQLAlchemy 2.0 with asyncpg for async database operations
- Add Alembic for schema migrations
- Add python-ulid for ULID support instead of UUID
- Create read/write engine separation with connection pooling
- Create FastAPI dependency injection for database sessions
- Create SQLAlchemy declarative base with timestamp mixin
- Initialize Alembic with async migration support
- Create initial migration for teams table (ULID primary key)
- Add comprehensive unit tests for engines and dependencies
- Configure Alembic to use settings module for database URL
- Enable ruff post-write hook for migration formatting

Refs: AIHCM-121
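A minimal sketch of the read/write engine separation this commit describes, assuming SQLAlchemy 2.0's async API with asyncpg. The URLs and pool sizes below are placeholders, not the project's actual configuration:

```python
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

# Writes go to the primary; reads can target a replica. Both URLs are
# illustrative placeholders.
write_engine = create_async_engine(
    "postgresql+asyncpg://app:secret@primary:5432/iam",
    pool_size=5,
    max_overflow=10,
)
read_engine = create_async_engine(
    "postgresql+asyncpg://app:secret@replica:5432/iam",
    pool_size=10,
    max_overflow=20,
)

# Session factories bound to each engine; a FastAPI dependency would yield
# a session from whichever factory the endpoint needs.
WriteSession = async_sessionmaker(write_engine, expire_on_commit=False)
ReadSession = async_sessionmaker(read_engine, expire_on_commit=False)
```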
- Add authzed library for SpiceDB integration
- Add python-ulid for ULID support
- Create ResourceType, RelationType, Permission enums (using Group, not Team)
- Create AuthorizationProvider protocol for swappable implementations
- Implement SpiceDBClient with async methods for relationships and permissions
- Create SpiceDB schema (.zed) with Tenant→Workspace→Group hierarchy
- Create AuthorizationProbe for domain-oriented observability
- Move ObservationContext to shared_kernel (fix architectural boundary)
- Add 35 unit tests for types and probes
- All 410 tests passing

Refs: AIHCM-122
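The AuthorizationProvider protocol itself is not shown in this PR excerpt; as a hedged sketch, a swappable provider might expose methods along these lines (the method names and signatures here are assumptions, not the repo's actual interface):

```python
from typing import Protocol


class AuthorizationProvider(Protocol):
    """Swappable authorization backend; SpiceDBClient would implement this."""

    async def write_relationship(
        self, resource: str, relation: str, subject: str
    ) -> None:
        """Record that subject holds relation on resource (e.g. a Group in a Workspace)."""
        ...

    async def check_permission(
        self, resource: str, permission: str, subject: str
    ) -> bool:
        """Return True if subject holds permission on resource."""
        ...
```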
Resolved conflicts in authorization files by accepting remote changes:
- shared_kernel/authorization/types.py (docstring fix)
- shared_kernel/authorization/spicedb/client.py (_parse_reference helper)
Add foundational infrastructure for the workspace service layer:
- Three domain exceptions (DuplicateWorkspaceNameError, CannotDeleteRootWorkspaceError, WorkspaceHasChildrenError)
- WorkspaceServiceProbe protocol and DefaultWorkspaceServiceProbe implementation following Domain-Oriented Observability patterns
- Export workspace probe classes from observability __init__.py

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
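The exception and probe shapes are not shown in this excerpt; as a rough sketch of what the commit describes (the workspace_created and workspace_creation_failed signatures are inferred from the service code quoted later in this review, everything else is assumed):

```python
from typing import Protocol


class DuplicateWorkspaceNameError(Exception):
    """A workspace with this name already exists within the tenant."""


class CannotDeleteRootWorkspaceError(Exception):
    """Deletion targeted a tenant's root workspace."""


class WorkspaceHasChildrenError(Exception):
    """Deletion targeted a workspace that still has child workspaces."""


class WorkspaceServiceProbe(Protocol):
    """Domain-oriented observability hooks for workspace operations."""

    def workspace_created(
        self,
        workspace_id: str,
        tenant_id: str,
        name: str,
        parent_workspace_id: str | None,
        is_root: bool,
        creator_id: str,
    ) -> None: ...

    def workspace_creation_failed(
        self, tenant_id: str, name: str, error: str
    ) -> None: ...
```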
Add WorkspaceService.create_workspace() with proper validation:
- Name uniqueness within tenant
- Parent workspace existence and tenant ownership
- Domain event emission via Workspace.create() factory
- Transactional persistence with session.begin()
- Observability via WorkspaceServiceProbe

Includes 5 unit tests following TDD methodology:
- Happy path creation
- Duplicate name rejection
- Non-existent parent rejection
- Cross-tenant parent rejection
- Event emission verification

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
Add create_root_workspace method to WorkspaceService with name priority logic (explicit > settings.default_workspace_name > "Root" fallback) and 4 TDD tests covering settings integration, fallback paths, and event emission.

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
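The name-priority cascade is simple enough to sketch directly. The `settings.default_workspace_name` attribute comes from the commit message; the function itself is illustrative, not the repo's code:

```python
def resolve_root_workspace_name(explicit_name: str | None, settings) -> str:
    # Priority: explicit argument > settings.default_workspace_name > "Root".
    if explicit_name:
        return explicit_name
    if getattr(settings, "default_workspace_name", None):
        return settings.default_workspace_name
    return "Root"
```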
Add get_workspace and list_workspaces to WorkspaceService with tenant scoping and domain probe observability. get_workspace enforces tenant boundaries by returning None for cross-tenant access. list_workspaces delegates to the repository with the scoped tenant ID.

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
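A sketch of the tenant-boundary behavior described above; the attribute and repository names mirror the service code quoted later in this review, but the method body is illustrative:

```python
async def get_workspace(self, workspace_id) -> "Workspace | None":
    workspace = await self._workspace_repository.get_by_id(workspace_id)
    # Cross-tenant lookups return None rather than revealing that the
    # workspace exists in another tenant.
    if workspace is None or workspace.tenant_id != self._scope_to_tenant:
        return None
    return workspace
```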
Walkthrough

This PR adds workspace support to the IAM application: a new WorkspaceService implementing tenant-scoped create/retrieve/list/delete flows with authorization and persistence orchestration; a WorkspaceServiceProbe Protocol and DefaultWorkspaceServiceProbe for observability; three workspace-specific exceptions (DuplicateWorkspaceNameError, CannotDeleteRootWorkspaceError, WorkspaceHasChildrenError); a DB migration adding a unique constraint on (tenant_id, name); dependency wiring and exports; and comprehensive unit tests covering success and failure scenarios.

Sequence Diagram(s)

sequenceDiagram
actor Client
participant Service as WorkspaceService
participant Repo as WorkspaceRepository
participant DB as Database
participant Auth as AuthorizationProvider
participant Probe as DefaultWorkspaceServiceProbe
participant Events as EventOutbox
Client->>Service: create_workspace(name, parent_id, creator_id)
Service->>Repo: check_name_unique(tenant_id, name)
alt name exists
Repo-->>Service: duplicate
Service->>Probe: workspace_creation_failed(...)
Service-->>Client: DuplicateWorkspaceNameError
else name available
Repo-->>Service: OK
Service->>Repo: load_parent(parent_id) (if provided)
alt parent invalid or cross-tenant
Repo-->>Service: invalid
Service->>Probe: workspace_creation_failed(...)
Service-->>Client: ValueError
else parent valid
Service->>DB: begin transaction
Service->>Repo: insert workspace
DB-->>Repo: success / IntegrityError
alt IntegrityError (race)
Repo-->>Service: IntegrityError
Service->>Probe: workspace_creation_failed(...)
Service-->>Client: DuplicateWorkspaceNameError
else success
Repo->>Events: queue WorkspaceCreated
Service->>Probe: workspace_created(...)
Service-->>Client: Workspace
end
end
end
sequenceDiagram
actor Client
participant Service as WorkspaceService
participant Repo as WorkspaceRepository
participant DB as Database
participant Auth as AuthorizationProvider
participant Probe as DefaultWorkspaceServiceProbe
participant Events as EventOutbox
Client->>Service: delete_workspace(workspace_id)
Service->>Repo: load_by_id(workspace_id)
alt not found
Repo-->>Service: None
Service-->>Client: False
else found
Repo-->>Service: Workspace
Service->>Auth: ensure_tenant(scope, workspace.tenant_id)
alt unauthorized
Auth-->>Service: Unauthorized
Service-->>Client: UnauthorizedError
else authorized
alt workspace is root
Service->>Probe: workspace_deletion_failed(...)
Service-->>Client: CannotDeleteRootWorkspaceError
else not root
Service->>DB: begin transaction
Service->>Repo: delete(workspace_id)
DB-->>Repo: success / IntegrityError
alt IntegrityError (has children)
Repo-->>Service: IntegrityError
Service->>Probe: workspace_deletion_failed(...)
Service-->>Client: WorkspaceHasChildrenError
else success
Repo->>Events: queue WorkspaceDeleted
Service->>Probe: workspace_deleted(...)
Service-->>Client: True
end
end
end
end
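Read alongside this diagram, the delete flow might look roughly like the following in code. This is a hedged sketch, not the PR's actual implementation: names such as `UnauthorizedError`, `is_root`, and the repository's `delete` method are taken from the diagram or assumed, and the IntegrityError translation follows the pattern suggested later in the review.

```python
from sqlalchemy.exc import IntegrityError


async def delete_workspace(self, workspace_id) -> bool:
    """Sketch of the tenant-scoped delete flow shown in the diagram."""
    workspace = await self._workspace_repository.get_by_id(workspace_id)
    if workspace is None:
        return False  # nothing to delete

    # Enforce the tenant boundary before any destructive action.
    if workspace.tenant_id != self._scope_to_tenant:
        raise UnauthorizedError("Workspace belongs to a different tenant")

    # The root workspace is never deletable.
    if workspace.is_root:
        self._probe.workspace_deletion_failed(
            workspace_id=workspace_id.value,
            error="Cannot delete root workspace",
        )
        raise CannotDeleteRootWorkspaceError("Cannot delete the root workspace")

    try:
        async with self._session.begin():
            await self._workspace_repository.delete(workspace_id)
    except IntegrityError as e:
        # A foreign-key violation here means child workspaces still exist.
        self._probe.workspace_deletion_failed(
            workspace_id=workspace_id.value,
            error=str(e),
        )
        raise WorkspaceHasChildrenError(
            "Workspace has child workspaces and cannot be deleted"
        ) from e

    self._probe.workspace_deleted(workspace_id=workspace_id.value)
    return True
```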
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/api/iam/application/observability/workspace_service_probe.py`:
- Around line 100-129: The probe logging currently unpacks
ObservationContext.as_dict() into kwargs causing TypeError when context keys
(e.g., tenant_id, user_id) collide with explicit event params; update the merge
logic so context keys are filtered out before unpacking: either add an exclude
parameter to _get_context_kwargs(exclude: Iterable[str]) and call it from
methods like DefaultWorkspaceServiceProbe.workspace_created (and other probe
methods) passing the explicit param names to exclude, or perform an explicit
dict merge in each method that copies self._context.as_dict() and removes any
keys present in the explicit kwargs (workspace_id, tenant_id, name,
parent_workspace_id, is_root, creator_id) before passing **context_kwargs to
self._logger.info; ensure ObservationContext.as_dict() remains unchanged.
In `@src/api/iam/application/services/workspace_service.py`:
- Around line 84-148: Add a DB-level UNIQUE constraint on (tenant_id, name) via
a new migration, then update the create flow in WorkspaceService (the method
that calls _workspace_repository.get_by_name, Workspace.create and
_workspace_repository.save) to catch the DB integrity violation (e.g.
sqlalchemy.exc.IntegrityError) thrown when the UNIQUE constraint is violated,
translate it into DuplicateWorkspaceNameError and call
_probe.workspace_creation_failed with the same tenant_id/name semantics before
re-raising; keep existing DuplicateWorkspaceNameError handling path for
pre-checks so concurrent inserts are handled by the database-level constraint
and surfaced consistently.
src/api/iam/application/observability/workspace_service_probe.py (outdated; resolved)
```python
try:
    # Check name uniqueness within tenant
    existing = await self._workspace_repository.get_by_name(
        tenant_id=self._scope_to_tenant,
        name=name,
    )
    if existing:
        raise DuplicateWorkspaceNameError(
            f"Workspace '{name}' already exists in tenant"
        )

    # Validate parent workspace exists
    parent = await self._workspace_repository.get_by_id(parent_workspace_id)
    if not parent:
        raise ValueError(
            f"Parent workspace {parent_workspace_id.value} does not exist - "
            f"the parent workspace must exist before creating a child"
        )

    # Validate parent belongs to the scoped tenant
    if parent.tenant_id != self._scope_to_tenant:
        raise ValueError(
            "Parent workspace belongs to different tenant - "
            "cannot create child workspace across tenant boundaries"
        )

    # Create workspace using domain factory method
    workspace = Workspace.create(
        name=name,
        tenant_id=self._scope_to_tenant,
        parent_workspace_id=parent_workspace_id,
    )

    # Persist in transaction
    async with self._session.begin():
        await self._workspace_repository.save(workspace)

    # Probe success
    self._probe.workspace_created(
        workspace_id=workspace.id.value,
        tenant_id=self._scope_to_tenant.value,
        name=name,
        parent_workspace_id=parent_workspace_id.value,
        is_root=False,
        creator_id=creator_id.value,
    )

    return workspace

except DuplicateWorkspaceNameError:
    self._probe.workspace_creation_failed(
        tenant_id=self._scope_to_tenant.value,
        name=name,
        error=f"Workspace '{name}' already exists in tenant",
    )
    raise
except ValueError:
    raise
except Exception as e:
    self._probe.workspace_creation_failed(
        tenant_id=self._scope_to_tenant.value,
        name=name,
        error=str(e),
    )
    raise
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find workspace model and schema definitions
fd -e py -e sql | grep -E "(workspace|model|schema|migration)" | head -20
```

Repository: openshift-hyperfleet/kartograph
Length of output: 1038

🏁 Script executed:

```shell
# Search for UniqueConstraint and uniqueness definitions related to workspaces
rg -n "UniqueConstraint|unique.*name|workspaces.*unique" --type py -A 3 -B 1
```

Repository: openshift-hyperfleet/kartograph
Length of output: 10587

🏁 Script executed:

```shell
# Search for IntegrityError handling in the codebase to understand patterns
rg -n "IntegrityError" --type py -B 2 -A 3
```

Repository: openshift-hyperfleet/kartograph
Length of output: 7199

🏁 Script executed:

```shell
# Find the workspace_service.py file and inspect it
fd workspace_service.py -x cat -n {}
```

Repository: openshift-hyperfleet/kartograph
Length of output: 50389

🏁 Script executed:

```shell
# Get the full migration file for workspace creation
find . -name "205809969bf4_create_workspaces_table.py" -exec cat {} \;
```

Repository: openshift-hyperfleet/kartograph
Length of output: 3013

🏁 Script executed:

```shell
# Check workspace model for constraint definitions
sed -n '1,100p' src/api/iam/infrastructure/models/workspace.py
```

Repository: openshift-hyperfleet/kartograph
Length of output: 2594

🏁 Script executed:

```shell
# Search for any unique constraint on workspace name/tenant
rg -n "UniqueConstraint.*name|name.*unique" src/api/iam/infrastructure/models/workspace.py -A 2 -B 2
```

Repository: openshift-hyperfleet/kartograph
Length of output: 57

🏁 Script executed:

```shell
# Search for any IntegrityError handling specifically for workspace name uniqueness
rg -n "workspaces_name|idx_workspaces_name" src/api/ -A 3 -B 1
```

Repository: openshift-hyperfleet/kartograph
Length of output: 1898

🏁 Script executed:

```shell
# Check if there are tests for concurrent workspace creation or race conditions
rg -n "race|concurrent|duplicate.*workspace.*name" src/api/tests/ -i
```

Repository: openshift-hyperfleet/kartograph
Length of output: 2018

🏁 Script executed:

```shell
# Verify workspace repository save implementation
fd workspace_repository.py -x cat -n {} | head -150
```

Repository: openshift-hyperfleet/kartograph
Length of output: 6666

🏁 Script executed:

```shell
# Verify no unique constraint exists on (tenant_id, name)
rg -n "UNIQUE|UniqueConstraint" src/api/iam/infrastructure/models/workspace.py
```

Repository: openshift-hyperfleet/kartograph
Length of output: 57

🏁 Script executed:

```shell
# Check if there are any pending migrations that might add the constraint
fd -e py src/api/infrastructure/migrations/versions/ -x grep -l "workspaces.*UNIQUE\|workspaces.*unique" {}
```

Repository: openshift-hyperfleet/kartograph
Length of output: 552
Add DB-level uniqueness constraint for workspace names to prevent concurrent race conditions.
The current get_by_name() check is racy—two concurrent requests can both pass the check and insert duplicates since no unique constraint exists on (tenant_id, name). The migration only creates a non-unique index for query performance. Add a UNIQUE constraint on (tenant_id, name) to the database, then handle the resulting IntegrityError to keep error semantics stable.
🛠️ Recommended approach
- Create a new migration that adds a UNIQUE constraint on (tenant_id, name)
- Update the service to catch and translate IntegrityError:

```diff
-        # Persist in transaction
-        async with self._session.begin():
-            await self._workspace_repository.save(workspace)
+        # Persist in transaction
+        try:
+            async with self._session.begin():
+                await self._workspace_repository.save(workspace)
+        except IntegrityError as e:
+            # Handle unique constraint violation on (tenant_id, name)
+            if "workspaces_tenant_id_name_key" in str(e):
+                self._probe.workspace_creation_failed(
+                    tenant_id=self._scope_to_tenant.value,
+                    name=name,
+                    error="Workspace name already exists in tenant",
+                )
+                raise DuplicateWorkspaceNameError(
+                    f"Workspace '{name}' already exists in tenant"
+                ) from e
+            raise
```

🤖 Prompt for AI Agents
In `@src/api/iam/application/services/workspace_service.py` around lines 84-148:
add a DB-level UNIQUE constraint on (tenant_id, name) via a new migration, then
update the create flow in WorkspaceService (the method that calls
_workspace_repository.get_by_name, Workspace.create and
_workspace_repository.save) to catch the DB integrity violation (e.g.
sqlalchemy.exc.IntegrityError) thrown when the UNIQUE constraint is violated,
translate it into DuplicateWorkspaceNameError and call
_probe.workspace_creation_failed with the same tenant_id/name semantics before
re-raising; keep existing DuplicateWorkspaceNameError handling path for
pre-checks so concurrent inserts are handled by the database-level constraint
and surfaced consistently.
…traint for workspace names

Resolve two issues:
1. Fix _get_context_kwargs in DefaultWorkspaceServiceProbe to accept an exclude parameter, preventing TypeError when ObservationContext keys collide with explicit method parameters (e.g. tenant_id, name). All 7 probe methods now filter their explicit parameter names.
2. Add DB-level unique constraint (uq_workspaces_tenant_name) on (tenant_id, name) to prevent race conditions in concurrent workspace creation. Both create_workspace and create_root_workspace now catch IntegrityError and convert it to DuplicateWorkspaceNameError.

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
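A sketch of the exclude-based fix this commit describes (illustrative; the repo's actual helper and logger calls may differ):

```python
from typing import Any, Iterable


def _get_context_kwargs(self, exclude: Iterable[str] = ()) -> dict[str, Any]:
    """Ambient observation context, minus keys the caller passes explicitly.

    Filtering avoids a TypeError from duplicate keyword arguments when
    context keys such as tenant_id collide with a probe method's own
    parameters.
    """
    context = dict(self._context.as_dict())
    for key in exclude:
        context.pop(key, None)
    return context


# Usage inside a probe method (sketch):
# self._logger.info(
#     "workspace_created",
#     workspace_id=workspace_id,
#     tenant_id=tenant_id,
#     **self._get_context_kwargs(exclude=("workspace_id", "tenant_id")),
# )
```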
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@src/api/infrastructure/migrations/versions/2551292512f6_add_unique_constraint_on_workspace_name_.py`:
- Around line 20-26: In the upgrade() function before calling
op.create_unique_constraint("uq_workspaces_tenant_name", "workspaces",
["tenant_id", "name"]), add a preflight query against the workspaces table to
detect duplicate (tenant_id, name) pairs (e.g., GROUP BY tenant_id, name HAVING
COUNT(*) > 1); if any duplicates are found raise a clear exception
(RuntimeError) that includes the offending tenant_id/name values so the
migration fails fast with an actionable message (alternatively implement an
explicit dedupe step to resolve duplicates before creating the constraint).
```python
def upgrade() -> None:
    """Add unique constraint on (tenant_id, name) for workspaces."""
    op.create_unique_constraint(
        "uq_workspaces_tenant_name",
        "workspaces",
        ["tenant_id", "name"],
    )
```
Add a preflight check for existing duplicate (tenant_id, name) pairs.
If duplicates exist, this migration will fail at deploy time with a DB error. Consider failing fast with a clear message (or auto-deduping) before creating the constraint. Line 20–26.
🛠️ Example preflight check

```diff
+import sqlalchemy as sa
+
 def upgrade() -> None:
     """Add unique constraint on (tenant_id, name) for workspaces."""
+    conn = op.get_bind()
+    dup = conn.execute(
+        sa.text(
+            """
+            SELECT tenant_id, name, COUNT(*) AS cnt
+            FROM workspaces
+            GROUP BY tenant_id, name
+            HAVING COUNT(*) > 1
+            LIMIT 1
+            """
+        )
+    ).fetchone()
+    if dup:
+        raise RuntimeError(
+            "Duplicate workspaces found for (tenant_id, name); "
+            "resolve before applying uq_workspaces_tenant_name."
+        )
     op.create_unique_constraint(
         "uq_workspaces_tenant_name",
         "workspaces",
         ["tenant_id", "name"],
     )
```

🤖 Prompt for AI Agents
In `@src/api/infrastructure/migrations/versions/2551292512f6_add_unique_constraint_on_workspace_name_.py`
around lines 20-26: in the upgrade() function, before calling
op.create_unique_constraint("uq_workspaces_tenant_name", "workspaces",
["tenant_id", "name"]), add a preflight query against the workspaces table to
detect duplicate (tenant_id, name) pairs (e.g., GROUP BY tenant_id, name HAVING
COUNT(*) > 1); if any duplicates are found raise a clear exception
(RuntimeError) that includes the offending tenant_id/name values so the
migration fails fast with an actionable message (alternatively implement an
explicit dedupe step to resolve duplicates before creating the constraint).
Not relevant; no systems are in production that would require this migration step.
@jsell-rh, understood! Since there are no production systems with potentially conflicting data, the preflight check isn't necessary. Thanks for clarifying the deployment context.