Push Notifications without Clerk #76
- Add explicit owner to co.list creation in push-register
- Copy pushDevices correctly during migration (map instead of spread)
- Add null check for PUBLIC_JAZZ_WORKER_ACCOUNT
- Remove PLAN.md
- Add latestReminderDueDate to NotificationSettings schema
- Client computes and syncs latest future reminder date on app start
- Server keeps refs until 30 days after max(lastSyncedAt, latestReminderDueDate)
- Users with far-future reminders stay registered even if inactive
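The retention rule above can be sketched as a small predicate. This is an illustrative reimplementation, not the PR's actual `isStaleRef` code; the parameter shapes are assumptions:

```typescript
// Sketch of the retention rule: a ref is kept until 30 days after
// max(lastSyncedAt, latestReminderDueDate).
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000

function isStaleRef(
  lastSyncedAt: Date,
  latestReminderDueDate?: string, // "YYYY-MM-DD", optional
  now: Date = new Date(),
): boolean {
  let reference = lastSyncedAt.getTime()
  if (latestReminderDueDate) {
    // A far-future reminder keeps the ref alive even if the app stays closed.
    let reminderTime = new Date(`${latestReminderDueDate}T00:00:00Z`).getTime()
    reference = Math.max(reference, reminderTime)
  }
  return now.getTime() - reference > THIRTY_DAYS_MS
}
```

This keeps inactive-but-scheduled users registered, which is exactly the behavior the last bullet describes.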
Greptile Summary

This PR removes the dependency on Clerk user credentials for server-side push notifications by implementing a group-based access control model in which the server worker account is granted write access to user notification settings.

Key Changes
Test Coverage

Comprehensive test coverage added for:
The implementation follows security best practices by ensuring proper access control through Jazz Groups and validating that the server worker can only access settings it has been explicitly granted access to.

Confidence Score: 4/5
Important Files Changed
Sequence Diagram

sequenceDiagram
participant User as User's Browser
participant Client as Client App
participant Server as Server API
participant Worker as Server Worker Account
participant Settings as NotificationSettings (Group-Owned)
Note over Client,Worker: App Initialization (useRegisterNotifications)
Client->>Client: Check if notificationSettings exists
Client->>Client: Sync language from root to settings
Client->>Client: Compute latestReminderDueDate
Client->>Client: Update latestReminderDueDate field
alt Settings owned by Account (migration needed)
Client->>Client: Create new Group
Client->>Worker: Load server worker account
Client->>Client: Add server worker as writer to Group
Client->>Settings: Create new group-owned NotificationSettings
Client->>Client: Copy data from old to new settings
Client->>Client: Update root reference to new settings
Client->>Client: Delete old account-owned settings
else Settings already group-owned
Client->>Settings: Check if server worker is member
alt Server not a member
Client->>Worker: Load server worker account
Client->>Settings: Add server worker as writer to Group
end
end
Client->>Server: POST /push/register with notificationSettingsId
Server->>Server: Authenticate user
Server->>Worker: Load server worker account
Server->>Settings: Load NotificationSettings (as server worker)
alt Settings not accessible
Server-->>Client: Error 400: ensure server has access
else Settings accessible
Worker->>Worker: Ensure notificationSettingsRefs list exists
Worker->>Worker: Check if ref already exists for this settingsId
alt Ref already exists
Worker->>Worker: Update lastSyncedAt timestamp
else New registration
Worker->>Worker: Create NotificationSettingsRef
Worker->>Worker: Push ref to notificationSettingsRefs list
end
Worker->>Worker: Wait for sync
Server-->>Client: Success: Registered successfully
end
Note over Server,Worker: Cron Job (runs periodically)
Server->>Worker: Load server worker account with refs
Worker->>Server: Return notificationSettingsRefs list
loop For each NotificationSettingsRef
Server->>Settings: Load notification settings
Server->>Server: Check if stale (30 days inactive + no future reminders)
alt Is stale
Server->>Server: Mark ref index for removal
else Not stale
Server->>Server: Check if past notification time
Server->>Server: Check if already delivered today
alt Ready to deliver
Server->>Server: Get enabled devices
loop For each device
Server->>User: Send push notification
end
Server->>Settings: Update lastDeliveredAt timestamp
end
end
end
Server->>Worker: Remove stale refs (batch delete)
Worker->>Worker: Wait for sync
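The register/upsert step in the diagram (refresh `lastSyncedAt` when a ref for the settings already exists, otherwise append a new one) can be sketched independently of Jazz. The `SettingsRef` shape and `upsertRef` name here are hypothetical, plain-object stand-ins for the PR's CoValue types:

```typescript
// Hypothetical plain-object version of the upsert performed by /push/register.
interface SettingsRef {
  notificationSettingsId: string
  userId: string
  lastSyncedAt: Date
}

function upsertRef(
  refs: SettingsRef[],
  notificationSettingsId: string,
  userId: string,
  now: Date = new Date(),
): SettingsRef[] {
  let existing = refs.find(r => r.notificationSettingsId === notificationSettingsId)
  if (existing) {
    // Re-registration from the same settings: just refresh the sync timestamp.
    existing.lastSyncedAt = now
    return refs
  }
  refs.push({ notificationSettingsId, userId, lastSyncedAt: now })
  return refs
}
```

The upsert keeps the refs list free of duplicates, which is what lets the cron treat one ref per user's settings as an invariant.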
📝 Walkthrough

Client registers devices with a new server endpoint after subscription; a client-side hook migrates and registers notification settings. Server adds a /push/register API, stores/upserts NotificationSettingsRefs, bulk-processes refs in the push cron with staleness detection, and introduces migration and worker-timeout utilities.

Changes
Sequence Diagram(s)

sequenceDiagram
actor User
participant Client as Client App
participant PushAPI as /push/register (Hono)
participant Worker as Server Worker
participant DB as Data Store
participant Cron as Push Cron
User->>Client: Add device & subscribe
Client->>Client: generateAuthToken(me)
Client->>PushAPI: POST /push/register { notificationSettingsId } + Authorization
PushAPI->>Worker: initServerWorker / authenticate
PushAPI->>DB: Load NotificationSettings
alt ref exists
PushAPI->>DB: Update NotificationSettingsRef.lastSyncedAt
else
PushAPI->>DB: Create NotificationSettingsRef (notificationSettings, userId, lastSyncedAt)
end
PushAPI-->>Client: { ok: true } / { ok:false, error }
Note over DB,Cron: Later cron run
Cron->>Worker: Load ServerAccount with notificationSettingsRefs
Cron->>DB: Filter refs (isStaleRef)
loop For each non-stale ref
Cron->>DB: Load NotificationSettings & devices
Cron->>Cron: Localize payload, send push, mark delivered
end
Cron->>DB: Remove stale refs
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (1 warning)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/app/hooks/use-register-notifications.ts`:
- Around line 135-139: computeLatestReminderDueDate uses new
Date().toISOString().slice(0,10) (UTC) which can mark same-day local reminders
as past; replace that with a timezone-aware local date string like new
Date().toLocaleDateString("sv-SE", { timeZone }) sliced to YYYY-MM-DD (matching
the pattern used in use-sync-reminders-to-sw.ts). In the
computeLatestReminderDueDate function (and where you call
extractReminders/findLatestFutureDate), read the account's timezone (e.g.,
me.timezone) and pass it to toLocaleDateString; if me.timezone is absent, fall
back to the system timezone, then feed that computed "today" into
findLatestFutureDate.
In `@src/server/features/push-register-logic.ts`:
- Around line 55-60: When creating the NotificationSettingsRef via
NotificationSettingsRef.create, pass the explicit owner argument to ensure
consistency with other creations; update the call to
NotificationSettingsRef.create({ notificationSettings, userId, lastSyncedAt: new
Date() }, { owner: worker }) and then push the resulting ref into refs.$jazz
(i.e., keep refs.$jazz.push(newRef)); this mirrors how
co.list(NotificationSettingsRef).create() supplies { owner: worker } and ensures
explicit ownership in server contexts.
In `@src/shared/schema/server.ts`:
- Around line 10-16: ServerAccount.schema needs a post-create/login migration to
backfill the now-required root field; add .withMigration(...) to the
ServerAccount definition that checks if root is missing and creates a root group
via Group.create({ owner: account }) (not by directly passing the account) and
assigns it to account.root, then saves/returns the updated object; reference
ServerAccount, ServerAccountRoot, .withMigration(), and Group.create() when
locating where to add this migration.
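The timezone-aware "today" computation recommended in the first prompt above boils down to formatting the current instant in the user's timezone instead of UTC. A sketch (the function name is illustrative; the `sv-SE` locale conveniently formats dates as YYYY-MM-DD):

```typescript
// Compute "today" as YYYY-MM-DD in a given IANA timezone, falling back to
// the system timezone when the account has none stored.
function localToday(timeZone?: string, now: Date = new Date()): string {
  return now.toLocaleDateString("sv-SE", timeZone ? { timeZone } : undefined)
}
```

Late-evening or early-morning UTC instants land on a different calendar day in many timezones, which is exactly how the UTC-based `toISOString().slice(0, 10)` can mark a same-day local reminder as past.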
🧹 Nitpick comments (4)
src/app/features/notification-settings.tsx (1)
977-988: Consider toast ordering when registration fails.

When `triggerNotificationRegistration` fails, users will see both a warning toast (registrationFailed) and a success toast (deviceAdded) in quick succession. This mixed messaging could confuse users about whether the operation succeeded.

Consider either:
- Skipping the success toast when registration fails, or
- Using a different success message that clarifies the device was added but server sync is pending
♻️ Possible approach
 // Trigger registration with server after adding device
 if (notifications?.$jazz.id) {
   let registrationResult = await triggerNotificationRegistration(
     notifications.$jazz.id,
   )
   if (!registrationResult.ok) {
     toast.warning(t("notifications.toast.registrationFailed"))
+    setOpen(false)
+    form.reset({ deviceName: getDeviceName() })
+    return
   }
 }
 toast.success(t("notifications.toast.deviceAdded"))

src/server/features/push-cron.ts (2)
122-128: Stale ref removal has O(n²) complexity.

Each iteration spreads `refs.values()` into a new array and performs `findIndex`. For large numbers of stale refs, this becomes inefficient.

Consider building an index map once before the loop:
♻️ Proposed optimization
 // Remove stale refs by ID (avoids index shifting issues)
+// Build index map once to avoid O(n²) lookups
+let refIndexMap = new Map(
+  [...refs.values()].map((r, i) => [r?.$jazz.id, i])
+)
+// Sort descending to splice from end first (avoids index shifts)
+let sortedStaleIndices = staleRefIds
+  .map(id => refIndexMap.get(id))
+  .filter((i): i is number => i !== undefined)
+  .sort((a, b) => b - a)
+for (let index of sortedStaleIndices) {
+  refs.$jazz.splice(index, 1)
+}
-for (let staleId of staleRefIds) {
-  let index = [...refs.values()].findIndex(r => r?.$jazz.id === staleId)
-  if (index > -1) {
-    refs.$jazz.splice(index, 1)
-  }
-}
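The same index-map-plus-descending-splice idea works on any array. A standalone, runnable version of the technique (generic names; not the PR's code):

```typescript
// Remove items by id from an array in roughly O(n + k log k) instead of O(n·k):
// build one id→index map, then splice from the highest index down so earlier
// removals don't shift the indices of later ones.
function removeByIds<T extends { id: string }>(items: T[], ids: string[]): void {
  let indexById = new Map(items.map((item, i) => [item.id, i]))
  let indices = ids
    .map(id => indexById.get(id))
    .filter((i): i is number => i !== undefined)
    .sort((a, b) => b - a)
  for (let index of indices) {
    items.splice(index, 1)
  }
}
```

Splicing in descending index order is the key detail: removing index 3 before index 1 leaves index 1 untouched, whereas the reverse order would invalidate it.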
155-173: Using `throw` for control flow is unconventional.

The function throws string messages for non-error conditions (not past notification time, already delivered today). While this works with the catch block in the caller, it conflates expected skip conditions with actual errors.
Consider returning a result object instead for clearer semantics:
♻️ Alternative approach
+type ProcessResult =
+  | { status: 'skipped'; reason: string }
+  | { status: 'processed'; userId: string; success: boolean }
+
 async function processNotificationRef(
   ref: LoadedRef,
   notificationSettings: LoadedNotificationSettings,
   currentUtc: Date,
-): Promise<{ userId: string; success: boolean } | null> {
+): Promise<ProcessResult> {
   let { userId } = ref

   // Check notification time
   if (!isPastNotificationTime(notificationSettings, currentUtc)) {
-    throw `Not past notification time ...`
+    return { status: 'skipped', reason: `Not past notification time...` }
   }

This would require updating the caller to handle the new return type.
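A runnable distillation of this pattern, with illustrative names, showing how a discriminated union lets callers branch on `status` instead of catching thrown strings:

```typescript
// Result type distinguishing expected skips from real processing outcomes.
type ProcessResult =
  | { status: "skipped"; reason: string }
  | { status: "processed"; userId: string; success: boolean }

function classify(
  pastNotificationTime: boolean,
  userId: string,
  sent: boolean,
): ProcessResult {
  if (!pastNotificationTime) {
    // An expected condition, not an error: report it as a skip.
    return { status: "skipped", reason: "not past notification time" }
  }
  return { status: "processed", userId, success: sent }
}
```

TypeScript then narrows the type inside each `status` branch, so skip reasons and delivery results cannot be accidentally conflated.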
src/server/features/push-register.ts (1)
14-14: Consider adding Jazz ID format validation for the notification settings ID.

The `notificationSettingsId` is a Jazz ID (validated as `ID<typeof NotificationSettings>` downstream), but the validator only checks for a non-empty string. Adding a Zod refine to validate Jazz ID format would fail fast on malformed requests rather than deferring validation to the load operation, which provides better UX and clearer error messages.
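A fail-fast shape check could look like the sketch below. The `co_` prefix and character set are assumptions about Jazz co-value ID format, not a documented guarantee; verify against jazz-tools before relying on it:

```typescript
// Hypothetical fail-fast check for a Jazz co-value ID; the "co_" prefix
// is an assumption, not a documented format guarantee.
function looksLikeJazzId(value: string): boolean {
  return /^co_[A-Za-z0-9]+$/.test(value)
}
```

Such a predicate can be dropped into a Zod `.refine(looksLikeJazzId, "invalid id")` so malformed requests are rejected at the validator instead of during the load.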
src/shared/schema/server.ts (Outdated)

 export let ServerAccountRoot = co.map({
   notificationSettingsRefs: co.list(NotificationSettingsRef).optional(),
 })

 export let ServerAccount = co.account({
   profile: co.map({ name: z.string() }),
-  root: co.map({}),
+  root: ServerAccountRoot,
🧩 Analysis chain
🏁 Script executed:
# First, let's understand the file structure and see the full context
cat -n src/shared/schema/server.ts | head -30

Repository: ccssmnn/tilly
🏁 Script executed:
# Search for withMigration usage in the codebase
rg "withMigration" --type tsRepository: ccssmnn/tilly
Length of output: 417
🏁 Script executed:
# Search for ServerAccount and ServerAccountRoot definitions
rg "ServerAccount|ServerAccountRoot" --type ts -A 3Repository: ccssmnn/tilly
Length of output: 5150
🏁 Script executed:
# Check package.json to confirm jazz-tools version
cat package.json | grep -A 2 -B 2 "jazz-tools"
🏁 Script executed:
# Search for co.account usage patterns in the codebase
rg "co\.account" --type ts -B 2 -A 5Repository: ccssmnn/tilly
Length of output: 337
🌐 Web query:
jazz-tools co.account withMigration API documentation v0.20.4
💡 Result:
For jazz-tools co.account(...).withMigration(...) (incl. the 0.20.x era), the canonical docs are on the “Accounts & Migrations” page: (jazz.tools)
withMigration() (what it is)
`co.account({...})` creates an Account schema; calling `.withMigration(fn)` attaches a migration that Jazz runs:

- after account creation, and
- every time the user logs in (Jazz waits for it to finish before your app gets the account). (jazz.tools)
Function signature (as documented/illustrated)
Examples show both sync and async migrations:
co.account({ root: MyRoot, profile: MyProfile })
.withMigration((account, creationProps?: { name: string }) => {
// init / backfill fields
});
co.account({ root: MyRoot, profile: MyProfile })
.withMigration(async (account) => {
// async backfill (often via ensureLoaded)
});Common migration patterns shown in docs
- Check whether a field exists via `account.$jazz.has("root")` / `account.$jazz.has("profile")`.
- Set initial values via `account.$jazz.set("root", {...})`, etc.
- For "deep" migrations, load before checking using `account.$jazz.ensureLoaded({ resolve: { root: true } })`. (jazz.tools)
Notes
- The docs site doesn't appear to publish version-pinned API pages for exactly v0.20.4; the above pages are the current reference for `withMigration` behavior and usage. (jazz.tools)
Citations:
- 1: https://jazz.tools/docs/svelte/core-concepts/schemas/accounts-and-migrations?utm_source=openai
- 2: https://jazz.tools/docs/svelte/core-concepts/schemas/accounts-and-migrations?utm_source=openai
- 3: https://jazz.tools/docs/svelte/core-concepts/schemas/accounts-and-migrations?utm_source=openai
- 4: https://jazz.tools/docs/svelte/core-concepts/schemas/accounts-and-migrations?utm_source=openai
- 5: https://jazz.tools/docs/svelte/core-concepts/schemas/accounts-and-migrations?utm_source=openai
🌐 Web query:
jazz-tools ServerAccountRoot.create owner pattern migration
💡 Result:
In jazz-tools, the recommended “owner pattern” is to own CoValues with a Group (not an Account), and to initialize/evolve a user’s root / profile via an account migration (.withMigration(...)). CoValues owned by an Account are rigid (can’t add collaborators / transfer ownership) and are kept mainly for backwards compatibility. [1], [2]
Pattern: create root in a migration (and set owners to Groups)
Migrations run after account creation and on every login, and Jazz waits for the migration to finish before handing the account to your app. [1]
import { co, Group, z } from "jazz-tools";
const MyRoot = co.map({
// ...
});
export const MyAccount = co
.account({ root: MyRoot, profile: co.profile() })
.withMigration((account) => {
if (account.root === undefined) {
const owner = Group.create(); // group-owned data (shareable)
account.root = MyRoot.create({
// create nested CoValues owned by `owner` (or their own Groups)
}, owner);
}
});

This matches the docs' approach: check === undefined (meaning "not set yet") vs null (meaning "not loaded yet"), and initialize missing fields inside the migration. [1]
Pattern: evolve existing accounts (schema changes)
When adding fields later, you typically:
- keep the new field `optional`, await `account.ensureLoaded({ resolve: { root: true } })`, then
- if the field is still `undefined`, populate it (often creating new Group-owned CoValues). [1]
Owner options when creating new CoValues
You can explicitly pass an owner (a Group or an Account) when creating a CoValue; docs show passing { owner: group } and also recommend reusing an existing CoValue’s ._owner when creating related objects. [2]
Sources:
[1] Accounts & Migrations (Jazz docs) – migrations for root/profile, undefined vs null, ensureLoaded() pattern (jazz.tools)
[2] Groups as permission scopes (Jazz docs) – Group vs Account ownership, { owner: group }, using ._owner (jazz.tools)
Citations:
- 1: https://jazz.tools/docs/vanilla/schemas/accounts-and-migrations?utm_source=openai
- 2: https://jazz.tools/docs/vanilla/groups/intro?utm_source=openai
Add migration to initialize ServerAccount.root for pre-existing accounts.
With root now required, existing server accounts created before this change will fail unless root is initialized. Use .withMigration() to backfill this field:
🛠️ Migration to initialize missing server root
+import { Group } from "jazz-tools"
+
export let ServerAccount = co.account({
profile: co.map({ name: z.string() }),
root: ServerAccountRoot,
-})
+}).withMigration(async (account) => {
+ if (account.root === undefined) {
+ const owner = Group.create()
+ account.root = ServerAccountRoot.create({}, owner)
+ }
+})The migration runs after account creation and on every login, ensuring all accounts have root initialized. Use Group.create() as the owner rather than passing the account directly, as this enables future permission management for the root object.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/server/features/push-cron.ts`:
- Around line 59-66: The code assumes serverAccount.root exists before checking
notificationSettingsRefs, which can throw if the record is missing; update the
logic around initServerWorker/worker.$jazz.ensureLoaded to first confirm
serverAccount and serverAccount.root are defined (e.g., if (!serverAccount ||
!serverAccount.root) { console.log("🔔 Missing server account or root, aborting
cron"); return; }) before accessing serverAccount.root.notificationSettingsRefs
so the cron safely exits or continues without crashing.
In `@src/shared/schema/server.ts`:
- Around line 19-26: Update the migration in withMigration to backfill
notificationSettingsRefs for existing account.root objects: when account.root
exists but account.root.notificationSettingsRefs is undefined, create or update
the ServerAccountRoot for that account (using ServerAccountRoot.create or by
cloning existing root data while setting notificationSettingsRefs: []) and
persist it via account.$jazz.set("root", <updatedRoot>); ensure this runs
alongside the existing branch that creates a root when account.root is undefined
so all roots end up with notificationSettingsRefs initialized.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/server/features/push-cron.ts`:
- Around line 242-248: The code marks notification settings delivered regardless
of send outcomes; change the flow so
markNotificationSettingsAsDelivered(notificationSettings, currentUtc) is only
invoked when at least one device send succeeded (use deviceResults.some(r =>
r.success) / userSuccess). If all sends fail, do not call
markNotificationSettingsAsDelivered and return a failing status (e.g., { status:
"failed", userId, success: false } or similar) so retries can occur; update the
return logic accordingly to reference userSuccess and avoid suppressing retries.
In `@src/shared/schema/server.ts`:
- Around line 19-35: ensureLoaded is only resolving root, so nested collections
like notificationSettingsRefs may be unloaded and appear undefined; change the
ensureLoaded call in withMigration to explicitly resolve the nested collection
(e.g., ensureLoaded({ resolve: { root: { notificationSettingsRefs: { $each: true
} } } })) and, if the root exists but notificationSettingsRefs is missing, set
only that field instead of recreating the whole ServerAccountRoot (use a
targeted mutation such as updating root.notificationSettingsRefs via
account.$jazz or root.$jazz rather than creating a new ServerAccountRoot and
calling account.$jazz.set("root", ...)); keep the ServerAccountRoot.create path
only for when root is truly undefined.
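The delivery gating described in the first prompt above (only mark settings as delivered when at least one device send succeeded) reduces to a one-line predicate. A sketch with hypothetical types:

```typescript
// Only mark settings as delivered when at least one device send succeeded;
// otherwise report failure so the next cron run can retry.
interface DeviceResult {
  deviceId: string
  success: boolean
}

function shouldMarkDelivered(deviceResults: DeviceResult[]): boolean {
  return deviceResults.some(r => r.success)
}
```

With this gate, a run where every push fails leaves `lastDeliveredAt` untouched, so the "already delivered today" check does not suppress the retry.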
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.github/workflows/ci.yml:
- Around line 16-18: Replace the non-reproducible "bun-version: latest" usage of
the oven-sh/setup-bun@v2 action by pinning to a concrete Bun version: add a
.bun-version file at repo root containing the target version (e.g. "1.8.0") or
choose a specific version string, then update both occurrences of the workflow
input bun-version (currently "latest") to that exact version string or read it
from .bun-version so CI is deterministic; ensure both places where
oven-sh/setup-bun@v2 is used are changed.
🧹 Nitpick comments (1)
package.json (1)
7-8: `build` and `build:prod` are identical.

Both scripts run `astro build`. If they're intended to differ (e.g., different environment variables or flags for production), consider differentiating them now. Otherwise, `build:prod` could be removed to reduce redundancy.
- uses: oven-sh/setup-bun@v2
  with:
    bun-version: latest
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find any repo-defined Bun version sources
echo "=== Searching for version files ==="
fd -a --hidden -E node_modules -E .git '(\.bun-version|\.tool-versions|package\.json)$' 2>/dev/null | head -20
echo ""
echo "=== Checking for bun version hints ==="
fd -a --hidden -E node_modules -E .git '(\.bun-version|\.tool-versions)$' -exec cat {} \; 2>/dev/null
echo ""
echo "=== Checking package.json for engines.bun ==="
if [ -f package.json ]; then
rg -A 2 '"engines"' package.json 2>/dev/null || echo "No engines field found"
fi
🏁 Script executed:
#!/bin/bash
# Examine the CI workflow file to see both occurrences
echo "=== CI Workflow file content (lines 1-50) ==="
if [ -f .github/workflows/ci.yml ]; then
cat -n .github/workflows/ci.yml | head -50
fi
🏁 Script executed:
#!/bin/bash
echo "=== Searching for bun.lock file ==="
find . -name "bun.lockb" -o -name "bun.lock" 2>/dev/null
echo ""
echo "=== Checking README for Bun version hints ==="
if [ -f README.md ]; then
rg -i 'bun' README.md || echo "No bun references in README"
fi
Pin Bun to a specific version instead of using latest to keep CI reproducible.
Using latest can break builds when Bun releases new versions. Create a .bun-version file with your target version and update the workflow to use it (or pin directly), then update both occurrences at lines 16-18 and 36-38.
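One way to implement the pin is a `.bun-version` file read by the action. This is a sketch that assumes `oven-sh/setup-bun@v2` supports a `bun-version-file` input; verify against the action's README before adopting:

```yaml
# Sketch: pin Bun via a version file instead of "latest".
# Assumes setup-bun@v2 supports the `bun-version-file` input.
- uses: oven-sh/setup-bun@v2
  with:
    bun-version-file: .bun-version
```

Pinning directly with `bun-version: "1.x.y"` works too; the file-based approach keeps both workflow occurrences in sync from one place.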
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/server/features/push-cron.ts`:
- Around line 84-100: The current loop collects staleRefIds early (using refs,
ref.notificationSettings, isStaleRef and pushing ref.$jazz.id into staleRefIds)
which can race with concurrent updates; instead stop marking for removal here
and either (a) only collect candidates and re-run the staleness check right
before removal, or (b) when processing staleRefIds later, reload the ref and its
notificationSettings and call isStaleRef(ref.lastSyncedAt,
notificationSettings.latestReminderDueDate) again (and ensure
notificationSettings.$isLoaded) before deleting; update the code paths that
consume staleRefIds so they perform this re-verification to avoid deleting refs
that were updated concurrently.
- Around line 159-162: processNotificationRef can return { status: "failed", ...
} but the promise handler that collects results into deliveryResults currently
only pushes when result.status === "processed", causing failed results to be
dropped; update the collector (the .then/.catch that accumulates into
deliveryResults) to also push results with status === "failed" (including userId
and reason) so deliveryResults contains skipped, processed and failed entries
for monitoring, and ensure the ProcessResult union and any downstream consumers
of deliveryResults handle the "failed" shape from processNotificationRef.
src/server/features/push-cron.ts (Outdated)

for (let ref of refs.values()) {
  if (!ref) continue

  let notificationSettings = ref.notificationSettings
  if (!notificationSettings?.$isLoaded) {
    console.log(`❌ User ${ref.userId}: Settings not loaded`)
    continue
  }

  // Check if ref is stale (no app open in 30 days after last sync or latest reminder)
  if (
    isStaleRef(ref.lastSyncedAt, notificationSettings.latestReminderDueDate)
  ) {
    staleRefIds.push(ref.$jazz.id)
    console.log(`🗑️ Marking stale ref for removal: ${ref.userId}`)
    continue
  }
Potential race condition: stale ref IDs collected before processing may become invalid.
Stale ref IDs are collected upfront (lines 94-100) while processing continues concurrently. If any ref's lastSyncedAt is updated during processing (e.g., via a parallel registration), the ref could be incorrectly removed later.
Consider moving stale detection to after all processing completes, or verifying staleness again before removal.
🛡️ Suggested fix: Re-verify staleness before removal
// Remove stale refs by ID (avoids index shifting issues)
let refIndexMap = new Map(
[...refs.values()].map((r, i) => [r?.$jazz.id, i]),
)
-let sortedStaleIndices = staleRefIds
- .map(id => refIndexMap.get(id))
- .filter((i): i is number => i !== undefined)
- .sort((a, b) => b - a)
+let sortedStaleIndices = staleRefIds
+ .map(id => {
+ let index = refIndexMap.get(id)
+ if (index === undefined) return undefined
+ // Re-verify staleness before removal
+ let ref = refs.at(index)
+ if (!ref || !isStaleRef(ref.lastSyncedAt, ref.notificationSettings?.latestReminderDueDate)) {
+ return undefined
+ }
+ return index
+ })
+ .filter((i): i is number => i !== undefined)
+ .sort((a, b) => b - a)🤖 Prompt for AI Agents
In `@src/server/features/push-cron.ts` around lines 84 - 100, The current loop
collects staleRefIds early (using refs, ref.notificationSettings, isStaleRef and
pushing ref.$jazz.id into staleRefIds) which can race with concurrent updates;
instead stop marking for removal here and either (a) only collect candidates and
re-run the staleness check right before removal, or (b) when processing
staleRefIds later, reload the ref and its notificationSettings and call
isStaleRef(ref.lastSyncedAt, notificationSettings.latestReminderDueDate) again
(and ensure notificationSettings.$isLoaded) before deleting; update the code
paths that consume staleRefIds so they perform this re-verification to avoid
deleting refs that were updated concurrently.
type ProcessResult =
  | { status: "skipped"; reason: string }
  | { status: "processed"; userId: string; success: boolean }
  | { status: "failed"; userId: string; reason: string }
ProcessResult with "failed" status is never added to deliveryResults.
The processNotificationRef function can return { status: "failed", ... } (line 251), but the .then handler (lines 109-112) only pushes results with status === "processed". Failed results are silently dropped from the response.
Consider whether failed results should be included in the response for monitoring purposes.
🔧 Suggested fix to include failed results
let userPromise = processNotificationRef(
ref,
notificationSettings,
currentUtc,
)
.then(result => {
- if (result.status === "processed") {
- deliveryResults.push(result)
+ if (result.status === "processed" || result.status === "failed") {
+ deliveryResults.push({
+ userId: result.userId,
+ success: result.status === "processed" ? result.success : false,
+ })
}
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/server/features/push-register.ts`:
- Around line 48-50: The handler currently returns c.json({ message:
result.error }, result.status) when result.ok is false but result.status is
optional, which can accidentally send HTTP 200; update the error return in the
push registration handler that checks result.ok (the code handling
registerNotificationSettingsWithServer and using c.json) to provide a sensible
fallback HTTP status (e.g., result.status ?? 500 or ?? 400), so that when
result.status is undefined the response uses the fallback status instead of 200.
- Line 26: initServerWorker() currently calls startWorker() every time causing a
new worker per request; change it to memoize a singleton worker instance: add a
module-scoped variable (e.g., cachedWorker) and have initServerWorker return
that cached worker if present, otherwise call startWorker(), assign its result
to cachedWorker and return it; ensure any initialization errors reset
cachedWorker so failed attempts can be retried and update call sites that
destructure { worker } to continue working with the cached object.
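The memoization described in the second prompt above is a small, reusable pattern. A generic sketch (the helper name is illustrative; the real `initServerWorker`/`startWorker` live in the PR's server code):

```typescript
// Generic memoized-singleton initializer: reuse the first successful result,
// but clear the cache on failure so a later call can retry.
function memoizeAsync<T>(init: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined
  return () => {
    if (!cached) {
      cached = init().catch(err => {
        cached = undefined // failed attempts must be retryable
        throw err
      })
    }
    return cached
  }
}

// Usage sketch: let initServerWorker = memoizeAsync(() => startWorker(...))
```

Caching the promise (rather than the resolved value) also collapses concurrent first calls into a single `startWorker` invocation.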
🧹 Nitpick comments (3)
src/server/features/push-register.ts (1)
8-8: Minor: Export statement precedes the variable declaration.

While this is valid JavaScript/TypeScript (due to module scoping), placing the export after the declaration or using inline export is more conventional and improves readability.
♻️ Optional: Move export inline with declaration
-export { pushRegisterApp } - -let pushRegisterApp = new Hono().post( +export let pushRegisterApp = new Hono().post(src/app/hooks/use-register-notifications.ts (2)
38-52: Consider removing `me` from the dependency array.

Including `me` in the dependency array could cause the effect to re-evaluate frequently as the object reference changes. The `registrationRan.current` guard prevents duplicate execution, but a cleaner approach would be to depend only on `me.$isLoaded`:

♻️ Suggested refinement

 useEffect(() => {
   if (registrationRan.current || !me.$isLoaded) return
   if (!me.root.notificationSettings) return
   registrationRan.current = true
   registerNotificationSettings(me).catch(error => {
     console.error("[Notifications] Registration error:", error)
     registrationRan.current = false
   })
-}, [me.$isLoaded, me])
+}, [me.$isLoaded])
136-143: Guard against non-JSON error responses.

If the server returns a non-JSON error response (e.g., plain text or HTML from a proxy), `registerResult.data.json()` will throw. The `tryCatch` wrapper handles this, but the error message will be unhelpful. Consider logging the status code for better debugging:

♻️ Suggested improvement

 if (!registerResult.data.ok) {
+  let status = registerResult.data.status
   let errorData = await tryCatch(registerResult.data.json())
   console.error(
     "[Notifications] Registration error:",
-    errorData.ok ? errorData.data : "Unknown error",
+    errorData.ok ? errorData.data : `HTTP ${status}`,
   )
   return
 }
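The `tryCatch` wrapper used in these snippets follows a common result-union pattern. The implementation below is a guess at its shape for illustration; the repository's actual `src/shared/lib/trycatch.ts` may differ.

```typescript
// Plausible shape of the tryCatch helper (assumed, not the repo's code):
// wraps a promise and returns a discriminated union instead of throwing.
type Result<T> = { ok: true; data: T } | { ok: false; error: unknown }

async function tryCatch<T>(promise: Promise<T>): Promise<Result<T>> {
  try {
    return { ok: true, data: await promise }
  } catch (error) {
    return { ok: false, error }
  }
}
```

With this shape, `errorData.ok` cleanly distinguishes a parsed JSON body from a thrown parse error, which is why the snippet above can branch on it.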
src/server/features/push-register.ts
Outdated
async c => {
  let { notificationSettingsId } = c.req.valid("json")

  let { worker } = await initServerWorker()
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the implementation of initServerWorker function
rg -n -A 20 'function initServerWorker|const initServerWorker|export.*initServerWorker' --glob '*.ts' --glob '*.js'

Repository: ccssmnn/tilly
Length of output: 1726
🏁 Script executed:
#!/bin/bash
# Search for the definition more broadly
rg -n 'initServerWorker' --glob '*.ts' --glob '*.js' -B 2 -A 15

Repository: ccssmnn/tilly
Length of output: 12087
🏁 Script executed:
#!/bin/bash
# Check file structure to understand the module
git ls-files src/server/features/ | head -20

Repository: ccssmnn/tilly
Length of output: 393
🏁 Script executed:
#!/bin/bash
# Search for startWorker implementation and any caching mechanism
rg -n 'startWorker' --glob '*.ts' --glob '*.js' -B 2 -A 20 | head -100

Repository: ccssmnn/tilly
Length of output: 5150
🏁 Script executed:
#!/bin/bash
# Check for any module-level caching or singleton patterns
rg -n 'let.*worker|const.*worker' src/server/lib/utils.ts -B 2 -A 2

Repository: ccssmnn/tilly
Length of output: 420
Implement caching for initServerWorker() to avoid recreating worker instances on every request.
initServerWorker() currently calls startWorker() unconditionally on every invocation with no caching or reuse mechanism. Since this function is called on every incoming request (in push-register.ts:26, push-cron.ts:59, and chat-messages.ts:57), it will create a new worker instance each time, causing significant performance degradation under load. Implement singleton/memoization to cache and reuse the worker instance across requests.
🤖 Prompt for AI Agents
In `@src/server/features/push-register.ts` at line 26, initServerWorker()
currently calls startWorker() every time causing a new worker per request;
change it to memoize a singleton worker instance: add a module-scoped variable
(e.g., cachedWorker) and have initServerWorker return that cached worker if
present, otherwise call startWorker(), assign its result to cachedWorker and
return it; ensure any initialization errors reset cachedWorker so failed
attempts can be retried and update call sites that destructure { worker } to
continue working with the cached object.
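The memoization described in the prompt above can be sketched as follows. `startWorker` is stubbed here since jazz-tools is not available in this snippet; caching the in-flight promise means concurrent callers share one initialization, and a failure resets the cache so it can be retried.

```typescript
// startWorker is a stand-in for jazz-tools/worker's startWorker().
type Worker = { id: number }

let started = 0
async function startWorker(): Promise<{ worker: Worker }> {
  started += 1
  return { worker: { id: started } }
}

let cachedWorker: Promise<{ worker: Worker }> | null = null

async function initServerWorker(): Promise<{ worker: Worker }> {
  if (!cachedWorker) {
    cachedWorker = startWorker().catch((error: unknown) => {
      cachedWorker = null // allow failed initializations to be retried
      throw error
    })
  }
  return cachedWorker
}
```

Call sites that destructure `{ worker }` keep working unchanged, since the cached promise resolves to the same object shape.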
if (!result.ok) {
  return c.json({ message: result.error }, result.status)
}
Potential 200 OK response for error cases when result.status is undefined.
Per the type signature in push-register-logic.ts, status is optional (status?: number). If registerNotificationSettingsWithServer returns { ok: false, error: "..." } without a status, passing undefined to c.json() will default to HTTP 200, masking the error.
🐛 Proposed fix to ensure a fallback status code
 if (!result.ok) {
-  return c.json({ message: result.error }, result.status)
+  return c.json({ message: result.error }, result.status ?? 500)
 }

🤖 Prompt for AI Agents
In `@src/server/features/push-register.ts` around lines 48 - 50, The handler
currently returns c.json({ message: result.error }, result.status) when
result.ok is false but result.status is optional, which can accidentally send
HTTP 200; update the error return in the push registration handler that checks
result.ok (the code handling registerNotificationSettingsWithServer and using
c.json) to provide a sensible fallback HTTP status (e.g., result.status ?? 500
or ?? 400), so that when result.status is undefined the response uses the
fallback status instead of 200.
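The fix boils down to never letting an `undefined` status reach the response helper. A tiny model of that status pick (names are illustrative, `FailResult` mirrors the `{ ok: false }` branch described above):

```typescript
// Models the status selection the handler should perform before c.json().
type FailResult = { ok: false; error: string; status?: number }

function responseStatus(result: FailResult): number {
  // an undefined status would otherwise default to 200, masking the error
  return result.status ?? 500
}
```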
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/server/features/push-register-logic.ts`:
- Around line 48-64: The current check-then-create on refs (using existingRef,
refs.values(), NotificationSettingsRef.create and refs.$jazz.push) is racy and
can produce duplicate refs under concurrent requests for the same
notificationSettingsId; make registration idempotent by switching to an atomic
upsert or adding a brief critical section: either store refs keyed by
notificationSettingsId (use the notificationSettingsId as the key instead of an
array) or implement a per-ID lock (a small Map of locks) and, if keeping the
array, re-check refs.values().find(...) for existingRef immediately after
acquiring the lock and before calling NotificationSettingsRef.create and
refs.$jazz.push (or dedupe the list after push). Ensure you reference
notificationSettingsId, existingRef, NotificationSettingsRef.create,
refs.$jazz.push and worker when making the change.
- Around line 18-37: After calling worker.$jazz.ensureLoaded(...) add the same
post-load validation used for NotificationSettings: check root.$isLoaded and
return an error response if false; specifically, after the ensureLoaded call for
worker.$jazz (the variable named root), verify root.$isLoaded and return { ok:
false, error: "Failed to load server root", status: 500 } to avoid unsafe access
to root.root.notificationSettingsRefs.
let notificationSettings = await NotificationSettings.load(
  notificationSettingsId as ID<typeof NotificationSettings>,
  { loadAs: worker },
)

if (!notificationSettings || !notificationSettings.$isLoaded) {
  return {
    ok: false,
    error: "Failed to load notification settings - ensure server has access",
    status: 400,
  }
}

if (!worker.root) {
  return { ok: false, error: "Server root not initialized", status: 500 }
}

let root = await worker.$jazz.ensureLoaded({
  resolve: { root: { notificationSettingsRefs: { $each: true } } },
})
🧩 Analysis chain
🏁 Script executed:
git ls-files src/server/features/

Repository: ccssmnn/tilly
Length of output: 393
🏁 Script executed:
cat -n src/server/features/push-register-logic.ts

Repository: ccssmnn/tilly
Length of output: 2565
🏁 Script executed:
rg -A 5 "tryCatch" --type ts | head -50

Repository: ccssmnn/tilly
Length of output: 2945
🏁 Script executed:
rg -B 2 -A 3 "\.load\(" --type ts src/ | head -80

Repository: ccssmnn/tilly
Length of output: 4395
🏁 Script executed:
rg -B 2 -A 3 "ensureLoaded" --type ts src/ | head -80

Repository: ccssmnn/tilly
Length of output: 4289
🏁 Script executed:
rg -B 3 -A 3 "NotificationSettings\.load|ensureLoaded" src/server/features/push-register.test.ts

Repository: ccssmnn/tilly
Length of output: 644
🏁 Script executed:
cat -n src/shared/lib/trycatch.ts

Repository: ccssmnn/tilly
Length of output: 1003
Add validation for ensureLoaded result to match the pattern used after load().
Line 35–37 calls worker.$jazz.ensureLoaded() without validating the result, but line 18–21 loads NotificationSettings and then checks $isLoaded on line 23. Add a similar $isLoaded check after line 37 to ensure the root is fully loaded before accessing root.root.notificationSettingsRefs on line 39:
let root = await worker.$jazz.ensureLoaded({
  resolve: { root: { notificationSettingsRefs: { $each: true } } },
})
if (!root.$isLoaded) {
  return { ok: false, error: "Failed to load server root", status: 500 }
}

This follows the error-handling pattern already established in the function and avoids unsafe property access.
🤖 Prompt for AI Agents
In `@src/server/features/push-register-logic.ts` around lines 18 - 37, After
calling worker.$jazz.ensureLoaded(...) add the same post-load validation used
for NotificationSettings: check root.$isLoaded and return an error response if
false; specifically, after the ensureLoaded call for worker.$jazz (the variable
named root), verify root.$isLoaded and return { ok: false, error: "Failed to
load server root", status: 500 } to avoid unsafe access to
root.root.notificationSettingsRefs.
let existingRef = Array.from(refs.values()).find(
  ref => ref?.notificationSettings?.$jazz.id === notificationSettingsId,
)

if (existingRef) {
  existingRef.$jazz.set("lastSyncedAt", new Date())
} else {
  let newRef = NotificationSettingsRef.create(
    {
      notificationSettings,
      userId,
      lastSyncedAt: new Date(),
    },
    { owner: worker },
  )
  refs.$jazz.push(newRef)
}
Make ref registration idempotent under concurrent requests.
Line 48-64 uses a check‑then‑create pattern on a shared list. Concurrent registrations for the same notificationSettingsId can both miss existingRef and push duplicates, which could later double-send notifications or bloat the list. Consider adding a uniqueness guard (e.g., a per‑ID lock, or storing refs keyed by notificationSettingsId, or a dedup pass after sync).
🤖 Prompt for AI Agents
In `@src/server/features/push-register-logic.ts` around lines 48 - 64, The current
check-then-create on refs (using existingRef, refs.values(),
NotificationSettingsRef.create and refs.$jazz.push) is racy and can produce
duplicate refs under concurrent requests for the same notificationSettingsId;
make registration idempotent by switching to an atomic upsert or adding a brief
critical section: either store refs keyed by notificationSettingsId (use the
notificationSettingsId as the key instead of an array) or implement a per-ID
lock (a small Map of locks) and, if keeping the array, re-check
refs.values().find(...) for existingRef immediately after acquiring the lock and
before calling NotificationSettingsRef.create and refs.$jazz.push (or dedupe the
list after push). Ensure you reference notificationSettingsId, existingRef,
NotificationSettingsRef.create, refs.$jazz.push and worker when making the
change.
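One of the suggested directions — keying refs by `notificationSettingsId` instead of keeping an array — can be sketched like this. A plain `Map` stands in for the Jazz list, and all names are illustrative:

```typescript
// A Map keyed by notificationSettingsId makes the upsert naturally
// idempotent: repeated registrations for the same ID converge on one entry.
type Ref = { userId: string; lastSyncedAt: Date }

let refs = new Map<string, Ref>()

function upsertRef(notificationSettingsId: string, userId: string): void {
  let existing = refs.get(notificationSettingsId)
  if (existing) {
    // existing registration: only refresh the sync timestamp
    existing.lastSyncedAt = new Date()
  } else {
    refs.set(notificationSettingsId, { userId, lastSyncedAt: new Date() })
  }
}
```

Because the key is the ID itself, the check-then-create race on a shared list disappears: two registrations for the same ID can never produce two entries.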
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/server/lib/utils.ts`:
- Around line 27-40: The timeout created for the race (timeoutPromise /
setTimeout producing WorkerTimeoutError) is never cleared, causing timer leaks;
wrap the Promise.race call around startWorker(...) and the timeout in a
try/finally, store the timer id from setTimeout in a variable, and call
clearTimeout(timerId) in the finally block so that whether startWorker resolves
or rejects the timeout is always cleared; update the code referencing
timeoutPromise, WorkerTimeoutError, and startWorker accordingly to use the
try/finally and clearTimeout(timerId).
🧹 Nitpick comments (3)
src/app/lib/notification-registration.ts (1)
31-34: Consider safer error message extraction.

The type assertion on line 33 assumes the parsed JSON has a specific shape. If the server returns a different structure, this silently falls back to "Unknown error" which is fine, but a more explicit narrowing could improve maintainability.
♻️ Optional: Explicit type guard
-let errorData = await tryCatch(result.data.json())
-let errorMessage = errorData.ok
-  ? (errorData.data as { message?: string }).message || "Unknown error"
-  : "Unknown error"
+let errorData = await tryCatch(result.data.json())
+let errorMessage = "Unknown error"
+if (errorData.ok && typeof errorData.data === "object" && errorData.data !== null && "message" in errorData.data) {
+  errorMessage = String((errorData.data as { message: unknown }).message) || "Unknown error"
+}

src/server/features/push-register.test.ts (1)
103-122: Consider using fake timers for deterministic timestamp comparison.

The 10ms `setTimeout` delay (line 105) for timestamp comparison is fragile and could cause flaky tests on slow CI environments.

♻️ Use fake timers

+import { vi } from "vitest"
+
 test("updates lastSyncedAt for existing registration", async () => {
   // ... setup code ...
   let firstSyncTime = loadedServer.root.notificationSettingsRefs?.[0]?.lastSyncedAt
-  await new Promise(r => setTimeout(r, 10))
+  vi.useFakeTimers()
+  vi.advanceTimersByTime(1000)
+  vi.useRealTimers()
   await registerNotificationSettingsWithServer(
     serverAccount,
     notificationSettings.$jazz.id,
     "test-user-123",
   )

src/server/features/chat-messages.ts (1)
56-68: Consider avoiding server-worker init when user-worker fails.

With `Promise.all`, `initServerWorker()` starts even if `initUserWorker()` times out and you return 504. If timeouts are common, this does unnecessary work. A simple mitigation is to initialize the user worker first, then the server worker, or switch to `Promise.allSettled` and only keep the server worker when the user worker succeeds.
src/server/lib/utils.ts
Outdated
let timeoutPromise = new Promise<never>((_, reject) =>
  setTimeout(() => reject(new WorkerTimeoutError()), 30000),
)

let workerResult = await Promise.race([
  startWorker({
    AccountSchema: UserAccount,
    syncServer: PUBLIC_JAZZ_SYNC_SERVER,
    accountID: jazzAccountId,
    accountSecret: jazzAccountSecret,
    skipInboxLoad: true,
  }),
  timeoutPromise,
])
🧩 Analysis chain
🏁 Script executed:
# First, check the actual code at the specified location
cat -n src/server/lib/utils.ts | sed -n '25,45p'

Repository: ccssmnn/tilly
Length of output: 724
🏁 Script executed:
# Find the startWorker function definition
rg "export.*startWorker|function startWorker|const startWorker" -A 5

Repository: ccssmnn/tilly
Length of output: 39
🏁 Script executed:
# Check if there are existing cleanup patterns with Promise.race and timeouts
rg "clearTimeout|AbortController|AbortSignal" -B 3 -A 3

Repository: ccssmnn/tilly
Length of output: 3954
🏁 Script executed:
# Search for where startWorker is imported
rg "import.*startWorker|from.*startWorker" -B 2 -A 2

Repository: ccssmnn/tilly
Length of output: 522
🏁 Script executed:
# Check the full file context around the import and usage
head -30 src/server/lib/utils.ts

Repository: ccssmnn/tilly
Length of output: 906
🌐 Web query:
jazz-tools startWorker AbortSignal cancellation API documentation
💡 Result:
jazz-tools/worker’s startWorker() does not expose an AbortSignal / cancellation option in its public API (no signal/AbortSignal field in the WorkerOptions type). The documented/typed options are things like accountID, accountSecret, syncServer, WebSocket, AccountSchema, and crypto. [1]
For shutdown/cleanup, startWorker() returns a done(): Promise<void> function you can call to tear down the worker, plus connection helpers like waitForConnection() and subscribeToConnectionChange(). [2] [1]
Sources:
[1] https://cdn.jsdelivr.net/npm/jazz-tools%400.18.5/dist/worker/index.d.ts
[2] https://jazz.tools/docs/vanilla/project-setup/server-side
Clear the timeout when the race resolves to prevent resource leaks.
The timeout timer is never cleared if startWorker succeeds or rejects with a different error. Over time, this accumulates pending timers and wastes resources. Since startWorker (from jazz-tools/worker) does not support AbortSignal cancellation, use a try-finally block to ensure clearTimeout is always called.
🔧 Suggested fix
- let timeoutPromise = new Promise<never>((_, reject) =>
- setTimeout(() => reject(new WorkerTimeoutError()), 30000),
- )
+ let timeoutId: ReturnType<typeof setTimeout>
+ let timeoutPromise = new Promise<never>((_, reject) => {
+ timeoutId = setTimeout(() => reject(new WorkerTimeoutError()), 30000)
+ })
- let workerResult = await Promise.race([
- startWorker({
- AccountSchema: UserAccount,
- syncServer: PUBLIC_JAZZ_SYNC_SERVER,
- accountID: jazzAccountId,
- accountSecret: jazzAccountSecret,
- skipInboxLoad: true,
- }),
- timeoutPromise,
- ])
+ let workerResult: Awaited<ReturnType<typeof startWorker>>
+ try {
+ workerResult = await Promise.race([
+ startWorker({
+ AccountSchema: UserAccount,
+ syncServer: PUBLIC_JAZZ_SYNC_SERVER,
+ accountID: jazzAccountId,
+ accountSecret: jazzAccountSecret,
+ skipInboxLoad: true,
+ }),
+ timeoutPromise,
+ ])
+ } finally {
+ clearTimeout(timeoutId!)
+ }

🤖 Prompt for AI Agents
In `@src/server/lib/utils.ts` around lines 27 - 40, The timeout created for the
race (timeoutPromise / setTimeout producing WorkerTimeoutError) is never
cleared, causing timer leaks; wrap the Promise.race call around startWorker(...)
and the timeout in a try/finally, store the timer id from setTimeout in a
variable, and call clearTimeout(timerId) in the finally block so that whether
startWorker resolves or rejects the timeout is always cleared; update the code
referencing timeoutPromise, WorkerTimeoutError, and startWorker accordingly to
use the try/finally and clearTimeout(timerId).
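The pattern in the suggested fix generalizes to a small helper. This is a sketch, not the repository's code: an arbitrary promise stands in for `startWorker`, and the timer is always cleared whether the work resolves or times out.

```typescript
class WorkerTimeoutError extends Error {
  constructor() {
    super("worker initialization timed out")
  }
}

async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timeoutId: ReturnType<typeof setTimeout> | undefined
  let timeoutPromise = new Promise<never>((_, reject) => {
    timeoutId = setTimeout(() => reject(new WorkerTimeoutError()), ms)
  })
  try {
    // whichever settles first wins; the loser's timer still needs cleanup
    return await Promise.race([work, timeoutPromise])
  } finally {
    clearTimeout(timeoutId) // always clear so the timer cannot leak
  }
}
```

Wrapping the race in one helper keeps the cleanup in a single place instead of repeating the try/finally at every call site.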
This PR extends the mission to remove the need to access user secrets on the server: the server worker account is added as a writer to a user's notification settings, and the client pings the server worker to add the user's settings CoValue to its account.
This way we only need to sign into the server worker and iterate through the available notification settings.
Until now, we signed into a worker using the user's credentials to do so.
Summary by CodeRabbit
New Features
Improvements
Tests
Chores