
Conversation


@jsell-rh jsell-rh commented Feb 9, 2026

Summary by CodeRabbit

  • New Features

    • Workspace management service: create (including root), retrieve, list, delete with tenant-scoped isolation.
    • Clear error cases for duplicate names, root deletion, and deletion when children exist.
  • Chores

    • Enforced unique (tenant, name) constraint at DB level.
    • Added observability probes and updated dependency wiring for workspace services.
  • Tests

    • Comprehensive unit tests covering success, error, tenant isolation, and observability events.

…lembic

- Add SQLAlchemy 2.0 with asyncpg for async database operations
- Add Alembic for schema migrations
- Add python-ulid for ULID support instead of UUID
- Create read/write engine separation with connection pooling
- Create FastAPI dependency injection for database sessions
- Create SQLAlchemy declarative base with timestamp mixin
- Initialize Alembic with async migration support
- Create initial migration for teams table (ULID primary key)
- Add comprehensive unit tests for engines and dependencies
- Configure Alembic to use settings module for database URL
- Enable ruff post-write hook for migration formatting

Refs: AIHCM-121
- Add authzed library for SpiceDB integration
- Add python-ulid for ULID support
- Create ResourceType, RelationType, Permission enums (using Group not Team)
- Create AuthorizationProvider protocol for swappable implementations
- Implement SpiceDBClient with async methods for relationships and permissions
- Create SpiceDB schema (.zed) with Tenant→Workspace→Group hierarchy
- Create AuthorizationProbe for domain-oriented observability
- Move ObservationContext to shared_kernel (fix architectural boundary)
- Add 35 unit tests for types and probes
- All 410 tests passing

Refs: AIHCM-122
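The swappable AuthorizationProvider protocol described above can be sketched with typing.Protocol. The method names and signatures here are illustrative assumptions, not the actual SpiceDBClient API:

```python
import asyncio
from typing import Protocol, runtime_checkable


@runtime_checkable
class AuthorizationProvider(Protocol):
    """Hypothetical shape of a swappable authorization backend."""

    async def write_relationship(self, resource: str, relation: str, subject: str) -> None: ...
    async def check_permission(self, resource: str, permission: str, subject: str) -> bool: ...


class InMemoryAuthorization:
    """Toy in-memory implementation for tests; stores relationships as tuples."""

    def __init__(self) -> None:
        self._rels = set()

    async def write_relationship(self, resource: str, relation: str, subject: str) -> None:
        self._rels.add((resource, relation, subject))

    async def check_permission(self, resource: str, permission: str, subject: str) -> bool:
        return (resource, permission, subject) in self._rels
```

Because the protocol is structural, the real SpiceDB-backed client and an in-memory test double are interchangeable wherever AuthorizationProvider is the declared dependency.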
Resolved conflicts in authorization files by accepting remote changes:
- shared_kernel/authorization/types.py (docstring fix)
- shared_kernel/authorization/spicedb/client.py (_parse_reference helper)
jsell-rh and others added 21 commits January 27, 2026 17:43
Add foundational infrastructure for the workspace service layer:
- Three domain exceptions (DuplicateWorkspaceNameError,
  CannotDeleteRootWorkspaceError, WorkspaceHasChildrenError)
- WorkspaceServiceProbe protocol and DefaultWorkspaceServiceProbe
  implementation following Domain-Oriented Observability patterns
- Export workspace probe classes from observability __init__.py

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
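A minimal sketch of the three domain exceptions named above (the shared base class is an assumption; the real hierarchy may derive differently):

```python
class WorkspaceServiceError(Exception):
    """Hypothetical common base for workspace service errors."""


class DuplicateWorkspaceNameError(WorkspaceServiceError):
    """Raised when a workspace name already exists within the tenant."""


class CannotDeleteRootWorkspaceError(WorkspaceServiceError):
    """Raised when deletion targets a tenant's root workspace."""


class WorkspaceHasChildrenError(WorkspaceServiceError):
    """Raised when deletion targets a workspace that still has children."""
```

A shared base lets callers catch all workspace domain errors in one except clause while still distinguishing individual failure modes.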
Add WorkspaceService.create_workspace() with proper validation:
- Name uniqueness within tenant
- Parent workspace existence and tenant ownership
- Domain event emission via Workspace.create() factory
- Transactional persistence with session.begin()
- Observability via WorkspaceServiceProbe

Includes 5 unit tests following TDD methodology:
- Happy path creation
- Duplicate name rejection
- Non-existent parent rejection
- Cross-tenant parent rejection
- Event emission verification

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
Add create_root_workspace method to WorkspaceService with name priority
logic (explicit > settings.default_workspace_name > "Root" fallback)
and 4 TDD tests covering settings integration, fallback paths, and
event emission.

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
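The name-priority logic described above (explicit > settings.default_workspace_name > "Root") can be sketched as a small pure function; the function name and parameters are illustrative:

```python
from typing import Optional


def resolve_root_workspace_name(
    explicit_name: Optional[str] = None,
    settings_default: Optional[str] = None,
) -> str:
    """Pick the root workspace name: explicit > settings default > 'Root' fallback."""
    if explicit_name:
        return explicit_name
    if settings_default:
        return settings_default
    return "Root"
```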
Add get_workspace and list_workspaces to WorkspaceService with tenant
scoping and domain probe observability. get_workspace enforces tenant
boundaries by returning None for cross-tenant access. list_workspaces
delegates to repository with scoped tenant ID.

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
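The tenant-boundary behavior of get_workspace can be sketched as follows (a simplified model using a dict-backed repository; the real service works against an async repository):

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Workspace:
    id: str
    tenant_id: str
    name: str


def get_workspace(
    repo: Dict[str, Workspace], scoped_tenant_id: str, workspace_id: str
) -> Optional[Workspace]:
    """Return the workspace only if it belongs to the scoped tenant."""
    ws = repo.get(workspace_id)
    if ws is None or ws.tenant_id != scoped_tenant_id:
        # Cross-tenant access is indistinguishable from "not found"
        return None
    return ws
```

Returning None for cross-tenant lookups (rather than a distinct "forbidden" error) avoids leaking the existence of other tenants' workspaces.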
@jsell-rh jsell-rh self-assigned this Feb 9, 2026

coderabbitai bot commented Feb 9, 2026

Walkthrough

This PR adds workspace support to the IAM application: a new WorkspaceService implementing tenant-scoped create/retrieve/list/delete flows with authorization and persistence orchestration; a WorkspaceServiceProbe Protocol and DefaultWorkspaceServiceProbe for observability; three workspace-specific exceptions (DuplicateWorkspaceNameError, CannotDeleteRootWorkspaceError, WorkspaceHasChildrenError); a DB migration adding a unique constraint on (tenant_id, name); dependency wiring and exports; and comprehensive unit tests covering success and failure scenarios.

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant Service as WorkspaceService
    participant Repo as WorkspaceRepository
    participant DB as Database
    participant Auth as AuthorizationProvider
    participant Probe as DefaultWorkspaceServiceProbe
    participant Events as EventOutbox

    Client->>Service: create_workspace(name, parent_id, creator_id)
    Service->>Repo: check_name_unique(tenant_id, name)
    alt name exists
        Repo-->>Service: duplicate
        Service->>Probe: workspace_creation_failed(...)
        Service-->>Client: DuplicateWorkspaceNameError
    else name available
        Repo-->>Service: OK
        Service->>Repo: load_parent(parent_id) (if provided)
        alt parent invalid or cross-tenant
            Repo-->>Service: invalid
            Service->>Probe: workspace_creation_failed(...)
            Service-->>Client: ValueError
        else parent valid
            Service->>DB: begin transaction
            Service->>Repo: insert workspace
            DB-->>Repo: success / IntegrityError
            alt IntegrityError (race)
                Repo-->>Service: IntegrityError
                Service->>Probe: workspace_creation_failed(...)
                Service-->>Client: DuplicateWorkspaceNameError
            else success
                Repo->>Events: queue WorkspaceCreated
                Service->>Probe: workspace_created(...)
                Service-->>Client: Workspace
            end
        end
    end
sequenceDiagram
    actor Client
    participant Service as WorkspaceService
    participant Repo as WorkspaceRepository
    participant DB as Database
    participant Auth as AuthorizationProvider
    participant Probe as DefaultWorkspaceServiceProbe
    participant Events as EventOutbox

    Client->>Service: delete_workspace(workspace_id)
    Service->>Repo: load_by_id(workspace_id)
    alt not found
        Repo-->>Service: None
        Service-->>Client: False
    else found
        Repo-->>Service: Workspace
        Service->>Auth: ensure_tenant(scope, workspace.tenant_id)
        alt unauthorized
            Auth-->>Service: Unauthorized
            Service-->>Client: UnauthorizedError
        else authorized
            alt workspace is root
                Service->>Probe: workspace_deletion_failed(...)
                Service-->>Client: CannotDeleteRootWorkspaceError
            else not root
                Service->>DB: begin transaction
                Service->>Repo: delete(workspace_id)
                DB-->>Repo: success / IntegrityError
                alt IntegrityError (has children)
                    Repo-->>Service: IntegrityError
                    Service->>Probe: workspace_deletion_failed(...)
                    Service-->>Client: WorkspaceHasChildrenError
                else success
                    Repo->>Events: queue WorkspaceDeleted
                    Service->>Probe: workspace_deleted(...)
                    Service-->>Client: True
                end
            end
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Title check | ✅ Passed | The title 'workspace service' directly aligns with the main change: a comprehensive new WorkspaceService implementation with supporting observability probes, domain exceptions, and database migrations.
Docstring Coverage | ✅ Passed | Docstring coverage is 95.00%, which is sufficient. The required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@src/api/iam/application/observability/workspace_service_probe.py`:
- Around line 100-129: The probe logging currently unpacks
ObservationContext.as_dict() into kwargs causing TypeError when context keys
(e.g., tenant_id, user_id) collide with explicit event params; update the merge
logic so context keys are filtered out before unpacking: either add an exclude
parameter to _get_context_kwargs(exclude: Iterable[str]) and call it from
methods like DefaultWorkspaceServiceProbe.workspace_created (and other probe
methods) passing the explicit param names to exclude, or perform an explicit
dict merge in each method that copies self._context.as_dict() and removes any
keys present in the explicit kwargs (workspace_id, tenant_id, name,
parent_workspace_id, is_root, creator_id) before passing **context_kwargs to
self._logger.info; ensure ObservationContext.as_dict() remains unchanged.
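The exclude-based merge suggested for the probe can be sketched as follows (function and key names are illustrative, not the actual probe API):

```python
def context_kwargs(context: dict, exclude: frozenset) -> dict:
    """Return context fields, dropping keys that collide with explicit params."""
    return {k: v for k, v in context.items() if k not in exclude}


def emit_workspace_created(logger_info, context: dict, **explicit) -> None:
    """Merge filtered context with explicit event params; explicit values win.

    Filtering first guarantees no duplicate keyword argument reaches the
    logger, which is what caused the original TypeError.
    """
    safe = context_kwargs(context, frozenset(explicit))
    logger_info("workspace_created", **safe, **explicit)
```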

In `@src/api/iam/application/services/workspace_service.py`:
- Around line 84-148: Add a DB-level UNIQUE constraint on (tenant_id, name) via
a new migration, then update the create flow in WorkspaceService (the method
that calls _workspace_repository.get_by_name, Workspace.create and
_workspace_repository.save) to catch the DB integrity violation (e.g.
sqlalchemy.exc.IntegrityError) thrown when the UNIQUE constraint is violated,
translate it into DuplicateWorkspaceNameError and call
_probe.workspace_creation_failed with the same tenant_id/name semantics before
re-raising; keep existing DuplicateWorkspaceNameError handling path for
pre-checks so concurrent inserts are handled by the database-level constraint
and surfaced consistently.

Comment on lines 84 to 148
        try:
            # Check name uniqueness within tenant
            existing = await self._workspace_repository.get_by_name(
                tenant_id=self._scope_to_tenant,
                name=name,
            )
            if existing:
                raise DuplicateWorkspaceNameError(
                    f"Workspace '{name}' already exists in tenant"
                )

            # Validate parent workspace exists
            parent = await self._workspace_repository.get_by_id(parent_workspace_id)
            if not parent:
                raise ValueError(
                    f"Parent workspace {parent_workspace_id.value} does not exist - "
                    f"the parent workspace must exist before creating a child"
                )

            # Validate parent belongs to the scoped tenant
            if parent.tenant_id != self._scope_to_tenant:
                raise ValueError(
                    "Parent workspace belongs to different tenant - "
                    "cannot create child workspace across tenant boundaries"
                )

            # Create workspace using domain factory method
            workspace = Workspace.create(
                name=name,
                tenant_id=self._scope_to_tenant,
                parent_workspace_id=parent_workspace_id,
            )

            # Persist in transaction
            async with self._session.begin():
                await self._workspace_repository.save(workspace)

            # Probe success
            self._probe.workspace_created(
                workspace_id=workspace.id.value,
                tenant_id=self._scope_to_tenant.value,
                name=name,
                parent_workspace_id=parent_workspace_id.value,
                is_root=False,
                creator_id=creator_id.value,
            )

            return workspace

        except DuplicateWorkspaceNameError:
            self._probe.workspace_creation_failed(
                tenant_id=self._scope_to_tenant.value,
                name=name,
                error=f"Workspace '{name}' already exists in tenant",
            )
            raise
        except ValueError:
            raise
        except Exception as e:
            self._probe.workspace_creation_failed(
                tenant_id=self._scope_to_tenant.value,
                name=name,
                error=str(e),
            )
            raise

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Find workspace model and schema definitions
fd -e py -e sql | grep -E "(workspace|model|schema|migration)" | head -20

Repository: openshift-hyperfleet/kartograph

Length of output: 1038


🏁 Script executed:

# Search for UniqueConstraint and uniqueness definitions related to workspaces
rg -n "UniqueConstraint|unique.*name|workspaces.*unique" --type py -A 3 -B 1

Repository: openshift-hyperfleet/kartograph

Length of output: 10587


🏁 Script executed:

# Search for IntegrityError handling in the codebase to understand patterns
rg -n "IntegrityError" --type py -B 2 -A 3

Repository: openshift-hyperfleet/kartograph

Length of output: 7199


🏁 Script executed:

# Find the workspace_service.py file and inspect it
fd workspace_service.py -x cat -n {}

Repository: openshift-hyperfleet/kartograph

Length of output: 50389


🏁 Script executed:

# Get the full migration file for workspace creation
find . -name "205809969bf4_create_workspaces_table.py" -exec cat {} \;

Repository: openshift-hyperfleet/kartograph

Length of output: 3013


🏁 Script executed:

# Check workspace model for constraint definitions
sed -n '1,100p' src/api/iam/infrastructure/models/workspace.py

Repository: openshift-hyperfleet/kartograph

Length of output: 2594


🏁 Script executed:

# Search for any unique constraint on workspace name/tenant
rg -n "UniqueConstraint.*name|name.*unique" src/api/iam/infrastructure/models/workspace.py -A 2 -B 2

Repository: openshift-hyperfleet/kartograph

Length of output: 57


🏁 Script executed:

# Search for any IntegrityError handling specifically for workspace name uniqueness
rg -n "workspaces_name|idx_workspaces_name" src/api/ -A 3 -B 1

Repository: openshift-hyperfleet/kartograph

Length of output: 1898


🏁 Script executed:

# Check if there are tests for concurrent workspace creation or race conditions
rg -n "race|concurrent|duplicate.*workspace.*name" src/api/tests/ -i

Repository: openshift-hyperfleet/kartograph

Length of output: 2018


🏁 Script executed:

# Verify workspace repository save implementation
fd workspace_repository.py -x cat -n {} | head -150

Repository: openshift-hyperfleet/kartograph

Length of output: 6666


🏁 Script executed:

# Verify no unique constraint exists on (tenant_id, name)
rg -n "UNIQUE|UniqueConstraint" src/api/iam/infrastructure/models/workspace.py

Repository: openshift-hyperfleet/kartograph

Length of output: 57


🏁 Script executed:

# Check if there are any pending migrations that might add the constraint
fd -e py src/api/infrastructure/migrations/versions/ -x grep -l "workspaces.*UNIQUE\|workspaces.*unique" {}

Repository: openshift-hyperfleet/kartograph

Length of output: 552


Add DB-level uniqueness constraint for workspace names to prevent concurrent race conditions.

The current get_by_name() check is racy: two concurrent requests can both pass the check and insert duplicates, since no unique constraint exists on (tenant_id, name). The migration only creates a non-unique index for query performance. Add a UNIQUE constraint on (tenant_id, name) at the database level, then handle the resulting IntegrityError to keep error semantics stable.

🛠️ Recommended approach
  1. Create a new migration that adds a UNIQUE constraint on (tenant_id, name)
  2. Update the service to catch and translate IntegrityError:
-            # Persist in transaction
-            async with self._session.begin():
-                await self._workspace_repository.save(workspace)
+            # Persist in transaction
+            try:
+                async with self._session.begin():
+                    await self._workspace_repository.save(workspace)
+            except IntegrityError as e:
+                # Handle unique constraint violation on (tenant_id, name)
+                if "workspaces_tenant_id_name_key" in str(e):
+                    self._probe.workspace_creation_failed(
+                        tenant_id=self._scope_to_tenant.value,
+                        name=name,
+                        error="Workspace name already exists in tenant",
+                    )
+                    raise DuplicateWorkspaceNameError(
+                        f"Workspace '{name}' already exists in tenant"
+                    ) from e
+                raise

…traint for workspace names

Resolve two issues:
1. Fix _get_context_kwargs in DefaultWorkspaceServiceProbe to accept an
   exclude parameter, preventing TypeError when ObservationContext keys
   collide with explicit method parameters (e.g. tenant_id, name).
   All 7 probe methods now filter their explicit parameter names.

2. Add DB-level unique constraint (uq_workspaces_tenant_name) on
   (tenant_id, name) to prevent race conditions in concurrent workspace
   creation. Both create_workspace and create_root_workspace now catch
   IntegrityError and convert to DuplicateWorkspaceNameError.

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@src/api/infrastructure/migrations/versions/2551292512f6_add_unique_constraint_on_workspace_name_.py`:
- Around line 20-26: In the upgrade() function before calling
op.create_unique_constraint("uq_workspaces_tenant_name", "workspaces",
["tenant_id", "name"]), add a preflight query against the workspaces table to
detect duplicate (tenant_id, name) pairs (e.g., GROUP BY tenant_id, name HAVING
COUNT(*) > 1); if any duplicates are found raise a clear exception
(RuntimeError) that includes the offending tenant_id/name values so the
migration fails fast with an actionable message (alternatively implement an
explicit dedupe step to resolve duplicates before creating the constraint).

Comment on lines +20 to +26
def upgrade() -> None:
    """Add unique constraint on (tenant_id, name) for workspaces."""
    op.create_unique_constraint(
        "uq_workspaces_tenant_name",
        "workspaces",
        ["tenant_id", "name"],
    )
@coderabbitai coderabbitai bot Feb 9, 2026

⚠️ Potential issue | 🟠 Major

Add a preflight check for existing duplicate (tenant_id, name) pairs.

If duplicates exist, this migration will fail at deploy time with a DB error. Consider failing fast with a clear message (or auto-deduping) before creating the constraint. Line 20–26.

🛠️ Example preflight check
+import sqlalchemy as sa
+
 def upgrade() -> None:
     """Add unique constraint on (tenant_id, name) for workspaces."""
+    conn = op.get_bind()
+    dup = conn.execute(
+        sa.text(
+            """
+            SELECT tenant_id, name, COUNT(*) AS cnt
+            FROM workspaces
+            GROUP BY tenant_id, name
+            HAVING COUNT(*) > 1
+            LIMIT 1
+            """
+        )
+    ).fetchone()
+    if dup:
+        raise RuntimeError(
+            "Duplicate workspaces found for (tenant_id, name); "
+            "resolve before applying uq_workspaces_tenant_name."
+        )
     op.create_unique_constraint(
         "uq_workspaces_tenant_name",
         "workspaces",
         ["tenant_id", "name"],
     )

Collaborator Author

Not relevant, no systems are in production that would require this migration step.


@jsell-rh, understood! Since there are no production systems with potentially conflicting data, the preflight check isn't necessary. Thanks for clarifying the deployment context.

@jsell-rh jsell-rh enabled auto-merge (squash) February 9, 2026 20:18
@jsell-rh jsell-rh merged commit dc50513 into main Feb 9, 2026
11 checks passed
@jsell-rh jsell-rh deleted the jsell/feat/AIHCM-144-workspace-service branch February 9, 2026 20:20