Conversation
Codecov Report — ❌ Patch coverage is … 📢 Thoughts on this report? Let us know!
Considering that this is intended to be a part of the REST API, I think it would make sense to use …
packages/job-worker/src/playout/model/implementation/PlayoutRundownModelImpl.ts (outdated; thread resolved)
packages/job-worker/src/playout/model/implementation/PlayoutSegmentModelImpl.ts (outdated; thread resolved)
Branch force-pushed: 4e48f4a → e3ed010
Lists internal playlistId and also lists all Rundowns with their externalIds
Moves example to the referenced item for better safety when updating in the future
Branch force-pushed: e3ed010 → dd9eb5b
Walkthrough

This change introduces a comprehensive REST API for ingest operations, enabling CRUD management of playlists, rundowns, segments, and parts. It adds server-side implementation with validation, type definitions, OpenAPI documentation, blueprint integration points, and supporting infrastructure for REST API-based ingest sources.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant APIRouter as API Router
    participant IngestServerAPI
    participant IngestOp as Ingest Operation
    participant Database as DB

    Client->>APIRouter: PUT /ingest/{studioId}/playlists/{playlistId}/rundowns/{rundownId}
    APIRouter->>IngestServerAPI: putRundown(studioId, playlistId, rundownId, payload)
    IngestServerAPI->>IngestServerAPI: Validate payload against blueprint
    IngestServerAPI->>Database: Find playlist by id/externalId
    alt Playlist not found
        IngestServerAPI-->>Client: 404 Not Found
    else Playlist found
        IngestServerAPI->>Database: Find rundown by id/externalId
        alt Rundown not found
            IngestServerAPI-->>Client: 404 Not Found
        else Rundown found
            IngestServerAPI->>IngestOp: runIngestOperation(UpdateRundown)
            IngestOp->>Database: Update rundown with new payload
            IngestOp-->>IngestServerAPI: Operation complete
            IngestServerAPI-->>Client: 200 Updated rundown response
        end
    end
```

```mermaid
sequenceDiagram
    participant Scheduler
    participant IngestActions
    participant ExternalAPI as External REST API
    participant Database as DB

    Scheduler->>IngestActions: reloadRundown(rundown with restApi source)
    alt Source is restApi
        IngestActions->>ExternalAPI: POST to rundown.resyncUrl
        alt Request succeeds
            ExternalAPI-->>IngestActions: 200 Response
            IngestActions->>Database: Log success
            IngestActions-->>Scheduler: WORKING
        else Connection error (ECONNREFUSED/ENOTFOUND)
            IngestActions->>Database: Log connection error
            IngestActions-->>Scheduler: WORKING (no throw)
        else Other error
            IngestActions->>Database: Log error details
            IngestActions-->>Scheduler: WORKING (no throw)
        end
    else Source is other type
        IngestActions->>IngestActions: Use existing control flow
        IngestActions-->>Scheduler: Result
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 15
🤖 Fix all issues with AI agents
In `@meteor/server/api/ingest/actions.ts`:
- Around line 37-46: The catch block in meteor/server/api/ingest/actions.ts
dereferences error.cause.code directly (inside the promise .catch) which can be
undefined; update the handler in the resync fetch .catch to safely check for the
presence of error.cause and its code (e.g., use optional chaining
error?.cause?.code or typeof checks) before comparing to
'ECONNREFUSED'/'ENOTFOUND', and fall back to logging a generic/rescue message
including JSON.stringify(error) or error.message when cause/code is absent so
the logger calls in that catch (the two logger.error lines) never throw due to a
missing cause.
In `@meteor/server/api/rest/v1/ingest.ts`:
- Around line 1040-1042: Fix the typo in the error message thrown when a part is
missing: update the string in the throw new Meteor.Error call that references
partId (the block checking existingPart) to use "does not exist" instead of
"does not exists" so the message reads `Part '<partId>' does not exist`.
- Around line 731-736: The loop in putSegments calls this.findSegment(...) which
throws when the segment is not found, making the subsequent if (!segment) check
unreachable; replace this.findSegment with this.softFindSegment so the function
returns null on missing segments and the existing null-check/early-return
remains valid. Locate the ingestSegments.map async callback (the block using
findSegment) and swap the call to use softFindSegment while leaving the rest of
the logic and null-handling intact.
- Around line 994-1008: The code in putParts uses this.findPart which throws if
a part is missing, making the subsequent if (!existingPart) branch unreachable;
replace the lookup call with this.softFindPart so missing parts return
null/undefined and the guard works as intended (locate the call to this.findPart
inside the ingestParts.map callback and swap to this.softFindPart, leaving the
surrounding logic and the runIngestOperation call unchanged).
- Around line 363-372: The code currently fetches all rundowns with
Rundowns.findFetchAsync({}) which causes runIngestOperation(RemoveRundown) to be
scheduled for every rundown across all studios; change the query to only fetch
rundowns for the current studio by using the studio id returned from
this.findStudio(studioId) (e.g. Rundowns.findFetchAsync({ studioId: studio._id
}) or the appropriate studioId field name on Rundown), then map those results to
runIngestOperation(studio._id, IngestJobs.RemoveRundown, { rundownExternalId:
rundown.externalId }) as before so only rundowns belonging to the specified
studio are deleted.
- Around line 511-530: The code in putRundowns currently calls findRundown which
throws 404 for missing rundowns, making the if (!existingRundown) branch
unreachable; to implement the intended "skip missing rundowns" behavior, replace
the call to findRundown with softFindRundown (or the existing soft-find helper)
so it returns null/undefined for non-existent rundowns, keep the if
(!existingRundown) return to skip them, and ensure softFindRundown is
imported/used consistently alongside checkRundownSource and runIngestOperation
to update only existing rundowns.
- Around line 159-179: The adaptRundown and adaptSegment functions currently
omit timing data required by the response schema; update adaptRundown to include
timing: map rawRundown.timing (RundownPlaylistTiming) into the returned
RundownResponse.timing field, and update adaptSegment to include timing: map
rawSegment.segmentTiming (SegmentTimingInfo) into SegmentResponse.timing
(ensuring the property names match the OpenAPI schema such as
timing.expectedStart/expectedEnd and timing.type). Locate the functions
adaptRundown and adaptSegment and add the timing property to their returned
objects using rawRundown.timing and rawSegment.segmentTiming respectively.
In `@meteor/server/api/rest/v1/playlists.ts`:
- Around line 53-83: The findSegment and findPart helpers currently throw
Meteor.Error(404) and perform global lookups; change them to return a 412
PartNotFound using UserError.from(UserErrorMessage.PartNotFound, 412) (or the
equivalent UserError construction) when not found, and tighten their DB queries
to restrict by the playlist's associated rundown IDs (so the $or: [{_id},
{externalId}] lookup also includes a { rundownId: <rundownId> } constraint or an
$in of rundown IDs for the playlist). Update callsites (setNextSegment,
setNextPart, queueNextSegment) to rely on the new semantics from
findSegment/findPart so the REST layer maps to PartNotFound/412 as documented.
In `@meteor/server/lib/rest/v1/ingest.ts`:
- Around line 255-262: The SegmentResponse type is missing the timing property
required by the OpenAPI spec and tests; update the SegmentResponse type to
include a timing object with expectedStart and expectedEnd (both appropriately
typed, e.g. strings or Date/ISO timestamps per project convention) so code and
tests referencing timing.expectedStart and timing.expectedEnd compile and match
the API schema; locate the SegmentResponse type declaration and add the timing
field accordingly.
- Around line 246-253: The RundownResponse type is missing the type and timing
fields required by the OpenAPI schema; update the exported type RundownResponse
to include a type field (e.g., type: string or the existing RundownType enum)
and a timing field (e.g., timing: RundownTiming or the appropriate timing type
used elsewhere), and ensure you reference/import the correct
RundownType/RundownTiming symbols used in the codebase so the TS type matches
the ingest.yaml schema and the tests that expect these properties.
In `@packages/openapi/api/definitions/ingest.yaml`:
- Around line 978-996: The schema lists "notFound" in the required array but
doesn't define it in properties for the response schemas rundownNotFound,
segmentNotFound, and partNotFound; remove "notFound" from each required list
(leave only "status" and "message") or alternatively add a corresponding
"notFound" property definition under properties (e.g., type: boolean or string)
so required matches properties—update the YAML for rundownNotFound,
segmentNotFound, and partNotFound accordingly.
- Around line 860-869: The PUT "part" operation's path parameters named
segmentId and partId are missing required: true; update the parameter objects
for segmentId and partId in the PUT part operation (the path parameters shown in
the diff) to include required: true so they are treated as mandatory path
parameters, matching the style of other endpoints.
- Around line 1172-1176: The OpenAPI schema for the numeric property "rank" uses
the invalid keyword "inclusiveMinimum"; update the schema by replacing
"inclusiveMinimum: 0.0" with the valid OpenAPI keyword "minimum: 0.0" (retain
type: number, description, and example), and if an exclusive bound is ever
needed use "exclusiveMinimum: true" alongside "minimum" instead of
"inclusiveMinimum".
In `@packages/openapi/src/__tests__/ingest.spec.ts`:
- Around line 56-67: The two tests depend on the same playlistIds array causing
a race where deletePlaylists removes items and makes playlistIds[0] invalid for
deletePlaylist; update tests so they don't share mutated state: either run the
'Can delete playlist by id' test before calling ingestApi.deletePlaylists, or
better, create a fresh playlist (or repopulate playlistIds) inside the 'Can
delete playlist by id' test and then call ingestApi.deletePlaylist with that new
id; refer to the test names and the ingestApi.deletePlaylists /
ingestApi.deletePlaylist calls and the playlistIds array when making the change.
- Around line 403-404: The 'Can update a part' test uses newIngestPart.name but
newIngestPart is only set in the 'Can request part by id' test, so add a safety
check: assert newIngestPart is defined (or throw a clear error) before mutating
it in the 'Can update a part' test, or skip/mark the test as dependent if unset;
specifically, modify the 'Can update a part' test to verify newIngestPart is not
undefined/null (e.g., with a guard or expect(newIngestPart).toBeDefined())
before doing newIngestPart.name = ... to avoid a TypeError when newIngestPart is
missing.
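The shared-state race called out above can be modeled with a toy stand-in for the generated ingest client (`FakeIngestApi`, `createPlaylist`, `deletePlaylist`, and `deletePlaylists` are all hypothetical names for illustration, not the real client API):

```typescript
// Toy reproduction of the race: deletePlaylists() mutates shared state, after
// which an id captured earlier in playlistIds is stale. Creating a fresh
// playlist inside the by-id test avoids depending on that mutated state.
class FakeIngestApi {
	private playlists = new Set<string>()
	createPlaylist(id: string): string {
		this.playlists.add(id)
		return id
	}
	deletePlaylists(): void {
		this.playlists.clear()
	}
	deletePlaylist(id: string): boolean {
		return this.playlists.delete(id)
	}
}

const api = new FakeIngestApi()
const playlistIds = [api.createPlaylist('pl-1')]

// "Can delete all playlists"
api.deletePlaylists()

// "Can delete playlist by id" — use a fresh playlist, not the stale id:
const freshId = api.createPlaylist('pl-2')
console.log(api.deletePlaylist(freshId)) // true
console.log(api.deletePlaylist(playlistIds[0])) // false — stale after deletePlaylists()
```

The same shape applies in the real spec: recreate (or repopulate) the fixture inside the dependent test instead of reusing an array mutated by an earlier test.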
🧹 Nitpick comments (7)
meteor/server/api/rest/v1/typeConversion.ts (2)
759-808: Consider consolidating duplicated validation logic.

The three validation functions (`validateAPIRundownPayload`, `validateAPISegmentPayload`, `validateAPIPartPayload`) share nearly identical structure. A generic helper could reduce duplication.

♻️ Example consolidated approach

```typescript
type PayloadValidatorKey =
	| 'validateRundownPayloadFromAPI'
	| 'validateSegmentPayloadFromAPI'
	| 'validatePartPayloadFromAPI'

async function validateAPIPayload(
	blueprintId: BlueprintId | undefined,
	payload: unknown,
	validatorKey: PayloadValidatorKey,
	payloadType: string
): Promise<string[]> {
	const blueprint = await getBlueprint(blueprintId, BlueprintManifestType.STUDIO)
	const blueprintManifest = evalBlueprint(blueprint) as StudioBlueprintManifest
	const validator = blueprintManifest[validatorKey]
	if (typeof validator !== 'function') {
		logger.info(`Blueprint ${blueprintManifest.blueprintId} does not support ${payloadType} payload validation`)
		return []
	}
	const blueprintContext = new CommonContext(`validateAPI${payloadType}Payload`, `blueprint:${blueprint._id}`)
	return validator(blueprintContext, payload)
}

export const validateAPIRundownPayload = (id: BlueprintId | undefined, p: unknown) =>
	validateAPIPayload(id, p, 'validateRundownPayloadFromAPI', 'Rundown')
export const validateAPISegmentPayload = (id: BlueprintId | undefined, p: unknown) =>
	validateAPIPayload(id, p, 'validateSegmentPayloadFromAPI', 'Segment')
export const validateAPIPartPayload = (id: BlueprintId | undefined, p: unknown) =>
	validateAPIPayload(id, p, 'validatePartPayloadFromAPI', 'Part')
```
759-762: Return type includes `undefined` but that path is unreachable.

The function signature declares `Promise<string[] | undefined>`, but `getBlueprint` throws a `Meteor.Error(404)` when the blueprint is not found. The `undefined` return case is never reached. Consider simplifying to `Promise<string[]>` for accuracy, or handle the missing blueprint case by returning `undefined` instead of throwing.

♻️ Option: Simplify return type

```diff
 export async function validateAPIRundownPayload(
 	blueprintId: BlueprintId | undefined,
 	rundownPayload: unknown
-): Promise<string[] | undefined> {
+): Promise<string[]> {
```

Apply the same change to `validateAPISegmentPayload` and `validateAPIPartPayload`.

meteor/server/api/ingest/actions.ts (1)
33-33: Add a timeout to the fetch request to prevent hanging on unresponsive endpoints.

The `fetch` call at line 33 has no timeout configured. If the external resync endpoint is slow or unresponsive, the request could hang indefinitely, potentially leading to resource exhaustion under repeated reload attempts.

⏱️ Proposed fix to add a timeout

```diff
-	fetch(resyncUrl, { method: 'POST' })
+	fetch(resyncUrl, { method: 'POST', signal: AbortSignal.timeout(30000) })
```

Node.js v18+ supports `AbortSignal.timeout()` natively, and this will abort the request after 30 seconds, preventing indefinite hangs during slow or unresponsive responses.

meteor/server/lib/rest/v1/ingest.ts (1)
235-237: Consider making `resyncUrl` optional in `RestApiIngestRundown`.

The `resyncUrl` field is marked as required here, but the PR description mentions the resync mechanism is optional ("Sofie can POST to an external resync URL"). If resync is not always needed, making this optional would provide more flexibility.

💡 Suggested change (if resync is optional)

```diff
 export type RestApiIngestRundown = Omit<IngestRundown, 'playlistExternalId'> & {
-	resyncUrl: string
+	resyncUrl?: string
 }
```

meteor/server/api/rest/v1/ingest.ts (2)
228-239: Unnecessary `$or` wrapper for single query condition.

The `$or` array contains only one condition, making it redundant. The same pattern appears in `findSegments` (lines 265-274) and `findParts` (lines 297-302).

♻️ Simplified queries

```diff
 private async findRundowns(studioId: StudioId, playlistId: RundownPlaylistId) {
 	const rundowns = await Rundowns.findFetchAsync({
-		$or: [
-			{
-				playlistId,
-				studioId,
-			},
-		],
+		playlistId,
+		studioId,
 	})
 	return rundowns
 }
```
130-140: Remove unnecessary return statements in `forEach` callbacks.

Static analysis flagged that `forEach` callbacks should not return values. The `return` from `this.validateSegment()` and `this.validatePart()` is ignored.

♻️ Remove returns

```diff
-	ingestRundown.segments.forEach((ingestSegment) => this.validateSegment(ingestSegment))
+	ingestRundown.segments.forEach((ingestSegment) => {
+		this.validateSegment(ingestSegment)
+	})
 }

 private validateSegment(ingestSegment: IngestSegment) {
 	check(ingestSegment, Object)
 	check(ingestSegment.externalId, String)
 	check(ingestSegment.name, String)
 	check(ingestSegment.rank, Number)
 	check(ingestSegment.parts, Array)
-	ingestSegment.parts.forEach((ingestPart) => this.validatePart(ingestPart))
+	ingestSegment.parts.forEach((ingestPart) => {
+		this.validatePart(ingestPart)
+	})
 }
```

meteor/server/api/rest/v1/playlists.ts (1)
564-574: Remove unreachable not-found branch after `findPlaylist`.

`findPlaylist` already throws a 404, so the `if (!playlist)` block is dead code. Removing it simplifies the flow.

🧹 Suggested cleanup

```diff
 const playlist = await this.findPlaylist(rundownPlaylistId)
-if (!playlist)
-	return ClientAPI.responseError(
-		UserError.from(
-			Error(`Rundown playlist ${rundownPlaylistId} does not exist`),
-			UserErrorMessage.RundownPlaylistNotFound,
-			undefined,
-			412
-		)
-	)
 if (!playlist.currentPartInfo?.partInstanceId || !playlist.activationId)
 	return ClientAPI.responseError(
 		UserError.from(
 			new Error(`Rundown playlist ${rundownPlaylistId} is not currently active`),
 			UserErrorMessage.InactiveRundown,
```
```ts
.catch((error) => {
	if (error.cause.code === 'ECONNREFUSED' || error.cause.code === 'ENOTFOUND') {
		logger.error(
			`Reload rundown: could not establish connection with "${resyncUrl}" (${error.cause.code})`
		)
		return
	}
	logger.error(
		`Reload rundown: error occured while sending resync request to "${resyncUrl}", message: ${error.message}, cause: ${JSON.stringify(error.cause)}`
	)
```
Potential null/undefined dereference on error.cause.
The fetch rejection may not always include a cause property with a code field. Accessing error.cause.code without a guard can throw a TypeError if cause is undefined (e.g., for network errors in some environments or non-Node fetch implementations).
🐛 Proposed fix to safely access error.cause
```diff
 .catch((error) => {
-	if (error.cause.code === 'ECONNREFUSED' || error.cause.code === 'ENOTFOUND') {
+	const causeCode = error.cause?.code
+	if (causeCode === 'ECONNREFUSED' || causeCode === 'ENOTFOUND') {
 		logger.error(
-			`Reload rundown: could not establish connection with "${resyncUrl}" (${error.cause.code})`
+			`Reload rundown: could not establish connection with "${resyncUrl}" (${causeCode})`
 		)
 		return
 	}
```
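The guarded access can be sketched as a self-contained helper (the `FetchErrorLike` shape and `classifyFetchError` name are illustrative assumptions mirroring Node's fetch rejections, not Sofie's actual code):

```typescript
// Sketch of the safe access pattern: read cause.code via optional chaining so
// a rejection without a `cause` (or without `cause.code`) is classified as a
// generic error instead of throwing a TypeError inside the error handler.
type FetchErrorLike = { message?: string; cause?: { code?: string } }

function classifyFetchError(error: unknown): 'connection' | 'other' {
	const causeCode = (error as FetchErrorLike)?.cause?.code
	return causeCode === 'ECONNREFUSED' || causeCode === 'ENOTFOUND' ? 'connection' : 'other'
}

// A cause-less rejection no longer blows up while being logged:
console.log(classifyFetchError(new Error('boom'))) // "other"
console.log(classifyFetchError({ cause: { code: 'ECONNREFUSED' } })) // "connection"
```

The same optional-chaining idea applies directly inside the `.catch` shown above.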
```ts
private adaptRundown(rawRundown: Rundown): RundownResponse {
	return {
		id: unprotectString(rawRundown._id),
		externalId: rawRundown.externalId,
		playlistId: unprotectString(rawRundown.playlistId),
		playlistExternalId: rawRundown.playlistExternalId,
		studioId: unprotectString(rawRundown.studioId),
		name: rawRundown.name,
	}
}

private adaptSegment(rawSegment: DBSegment): SegmentResponse {
	return {
		id: unprotectString(rawSegment._id),
		externalId: rawSegment.externalId,
		name: rawSegment.name,
		rank: rawSegment._rank,
		rundownId: unprotectString(rawSegment.rundownId),
		isHidden: rawSegment.isHidden,
	}
}
```
Add `timing` field to `adaptRundown` and `adaptSegment` return values.

The test file expects `timing` on rundown responses (lines 90-100: `rundown.timing.type`) and segment responses (lines 192-194: `segment.timing.expectedStart`, `segment.timing.expectedEnd`), and the OpenAPI spec defines these fields as required components of the response schema. However, both adapter methods omit the timing information available from the database models.

Extract timing from `rawRundown.timing` (`RundownPlaylistTiming`) for rundown responses and `segmentTiming` from `rawSegment.segmentTiming` (`SegmentTimingInfo`) for segment responses, then include them in the respective return objects.
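The suggested change can be sketched with simplified stub types (`SegmentTimingInfo`, `DbSegmentStub`, and `SegmentResponseStub` below are stand-ins for the real Sofie/OpenAPI types, which may differ in detail):

```typescript
// Hedged sketch: copy the timing object from the DB model onto the API
// response, using the field names from the review comment
// (segmentTiming → timing.expectedStart / timing.expectedEnd).
interface SegmentTimingInfo {
	expectedStart?: number
	expectedEnd?: number
}
interface DbSegmentStub {
	_id: string
	name: string
	segmentTiming?: SegmentTimingInfo
}
interface SegmentResponseStub {
	id: string
	name: string
	timing?: SegmentTimingInfo
}

function adaptSegment(rawSegment: DbSegmentStub): SegmentResponseStub {
	return {
		id: rawSegment._id,
		name: rawSegment.name,
		timing: rawSegment.segmentTiming, // previously omitted; required by the response schema
	}
}

const response = adaptSegment({
	_id: 's1',
	name: 'Top stories',
	segmentTiming: { expectedStart: 0, expectedEnd: 60000 },
})
console.log(response.timing?.expectedEnd) // 60000
```

`adaptRundown` would follow the same shape, mapping `rawRundown.timing` onto `RundownResponse.timing`.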
```ts
const rundowns = await Rundowns.findFetchAsync({})
const studio = await this.findStudio(studioId)

await Promise.all(
	rundowns.map(async (rundown) =>
		runIngestOperation(studio._id, IngestJobs.RemoveRundown, {
			rundownExternalId: rundown.externalId,
		})
	)
)
```
Critical: deletePlaylists deletes ALL rundowns, not just those for the specified studio.
The query Rundowns.findFetchAsync({}) fetches all rundowns in the database regardless of studioId. This will delete rundowns belonging to other studios.
🐛 Proposed fix
```diff
 async deletePlaylists(
 	_connection: Meteor.Connection,
 	_event: string,
 	studioId: StudioId
 ): Promise<ClientAPI.ClientResponse<undefined>> {
 	check(studioId, String)
-	const rundowns = await Rundowns.findFetchAsync({})
 	const studio = await this.findStudio(studioId)
+	const rundowns = await Rundowns.findFetchAsync({ studioId: studio._id })
 	await Promise.all(
 		rundowns.map(async (rundown) =>
 			runIngestOperation(studio._id, IngestJobs.RemoveRundown, {
 				rundownExternalId: rundown.externalId,
 			})
 		)
 	)
 	return ClientAPI.responseSuccess(undefined)
 }
```
```ts
await Promise.all(
	ingestRundowns.map(async (ingestRundown) => {
		const rundownExternalId = ingestRundown.externalId
		const existingRundown = await this.findRundown(studio._id, playlist._id, rundownExternalId)
		if (!existingRundown) {
			return
		}

		this.checkRundownSource(existingRundown)

		return runIngestOperation(studio._id, IngestJobs.UpdateRundown, {
			rundownExternalId: ingestRundown.externalId,
			ingestRundown: { ...ingestRundown, playlistExternalId: playlist.externalId },
			isCreateAction: true,
			rundownSource: {
				type: 'restApi',
				resyncUrl: ingestRundown.resyncUrl,
			},
		})
	})
```
putRundowns silently skips non-existent rundowns instead of updating existing ones.
findRundown (line 514) throws a 404 error if the rundown doesn't exist, making the if (!existingRundown) check on line 515 unreachable. This contradicts PUT semantics where the operation should either update or fail explicitly.
If the intent is to skip missing rundowns, use softFindRundown pattern (similar to softFindSegment). If the intent is to fail on missing rundowns, remove the unreachable code.
🐛 Proposed fix (skip missing rundowns)

```diff
 await Promise.all(
 	ingestRundowns.map(async (ingestRundown) => {
 		const rundownExternalId = ingestRundown.externalId
-		const existingRundown = await this.findRundown(studio._id, playlist._id, rundownExternalId)
-		if (!existingRundown) {
+		const existingRundown = await Rundowns.findOneAsync({
+			$or: [
+				{ _id: protectString<RundownId>(rundownExternalId), playlistId: playlist._id, studioId: studio._id },
+				{ externalId: rundownExternalId, playlistId: playlist._id, studioId: studio._id },
+			],
+		})
+		if (!existingRundown) {
 			return
 		}
 		this.checkRundownSource(existingRundown)
```

🤖 Prompt for AI Agents
In `@meteor/server/api/rest/v1/ingest.ts` around lines 511 - 530, The code in
putRundowns currently calls findRundown which throws 404 for missing rundowns,
making the if (!existingRundown) branch unreachable; to implement the intended
"skip missing rundowns" behavior, replace the call to findRundown with
softFindRundown (or the existing soft-find helper) so it returns null/undefined
for non-existent rundowns, keep the if (!existingRundown) return to skip them,
and ensure softFindRundown is imported/used consistently alongside
checkRundownSource and runIngestOperation to update only existing rundowns.
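The soft-find pattern suggested here can be sketched as a thin wrapper that converts a throwing lookup into a nullable one. This is a minimal sketch, not Sofie's actual code: `makeSoftFind`, the fake `findSegment`, and the `status` field on the error are all illustrative assumptions.

```typescript
// Sketch of the soft-find pattern: wrap a throwing finder so that a 404
// "not found" error becomes a null result, while other errors propagate.
type Finder<T> = (...args: string[]) => Promise<T>

function makeSoftFind<T>(find: Finder<T>): (...args: string[]) => Promise<T | null> {
	return async (...args: string[]) => {
		try {
			return await find(...args)
		} catch (e: any) {
			if (e && e.status === 404) return null // swallow only "not found"
			throw e
		}
	}
}

// Hypothetical throwing finder, standing in for this.findSegment
const segments: Record<string, { name: string }> = { seg1: { name: 'Intro' } }
async function findSegment(id: string): Promise<{ name: string }> {
	const seg = segments[id]
	if (!seg) throw Object.assign(new Error('Segment not found'), { status: 404 })
	return seg
}

const softFindSegment = makeSoftFind(findSegment)
```

With a wrapper like this, an `if (!segment) return` guard becomes reachable again, matching the skip-missing semantics the reviewer describes.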
```ts
await Promise.all(
	ingestSegments.map(async (ingestSegment) => {
		const segment = await this.findSegment(rundown._id, ingestSegment.externalId)
		if (!segment) {
			return
		}
```
putSegments has the same unreachable code issue.
findSegment throws on not found, making the null check unreachable. Use softFindSegment instead.
🐛 Proposed fix

```diff
 await Promise.all(
 	ingestSegments.map(async (ingestSegment) => {
-		const segment = await this.findSegment(rundown._id, ingestSegment.externalId)
+		const segment = await this.softFindSegment(rundown._id, ingestSegment.externalId)
 		if (!segment) {
 			return
 		}
```

🤖 Prompt for AI Agents
In `@meteor/server/api/rest/v1/ingest.ts` around lines 731 - 736, The loop in
putSegments calls this.findSegment(...) which throws when the segment is not
found, making the subsequent if (!segment) check unreachable; replace
this.findSegment with this.softFindSegment so the function returns null on
missing segments and the existing null-check/early-return remains valid. Locate
the ingestSegments.map async callback (the block using findSegment) and swap the
call to use softFindSegment while leaving the rest of the logic and
null-handling intact.
```yaml
- name: segmentId
  in: path
  description: Internal or external ID of the Segment the Part to update belongs to.
  schema:
    type: string
- name: partId
  in: path
  description: Internal or external ID of the Part to update.
  schema:
    type: string
```
Missing required: true on segmentId and partId parameters for PUT part.
Both parameters are path parameters and should be marked as required, consistent with other endpoints.
📝 Fix missing required attributes
📝 Fix missing required attributes

```diff
 - name: segmentId
   in: path
   description: Internal or external ID of the Segment the Part to update belongs to.
+  required: true
   schema:
     type: string
 - name: partId
   in: path
   description: Internal or external ID of the Part to update.
+  required: true
   schema:
     type: string
```
🤖 Prompt for AI Agents
In `@packages/openapi/api/definitions/ingest.yaml` around lines 860 - 869, The PUT
"part" operation's path parameters named segmentId and partId are missing
required: true; update the parameter objects for segmentId and partId in the PUT
part operation (the path parameters shown in the diff) to include required: true
so they are treated as mandatory path parameters, matching the style of other
endpoints.
```yaml
rundownNotFound:
  description: The specified Rundown does not exist.
  content:
    application/json:
      schema:
        type: object
        properties:
          status:
            type: number
            const: 404
            example: 404
          message:
            type: string
            example: The specified Rundown was not found.
        required:
          - status
          - notFound
          - message
        additionalProperties: false
```
Schema inconsistency: notFound field in required but not in properties.
The rundownNotFound, segmentNotFound, and partNotFound response schemas list notFound as required but don't define it in properties. This will cause schema validation issues.
📝 Fix for rundownNotFound (apply similar fix to segmentNotFound and partNotFound)

```diff
 rundownNotFound:
   description: The specified Rundown does not exist.
   content:
     application/json:
       schema:
         type: object
         properties:
           status:
             type: number
             const: 404
             example: 404
+          notFound:
+            type: string
+            const: rundown
+            example: rundown
           message:
             type: string
             example: The specified Rundown was not found.
         required:
           - status
           - notFound
           - message
         additionalProperties: false
```
🤖 Prompt for AI Agents
In `@packages/openapi/api/definitions/ingest.yaml` around lines 978 - 996, The
schema lists "notFound" in the required array but doesn't define it in
properties for the response schemas rundownNotFound, segmentNotFound, and
partNotFound; remove "notFound" from each required list (leave only "status" and
"message") or alternatively add a corresponding "notFound" property definition
under properties (e.g., type: boolean or string) so required matches
properties—update the YAML for rundownNotFound, segmentNotFound, and
partNotFound accordingly.
```yaml
rank:
  type: number
  description: The position of the Segment in the parent Rundown.
  inclusiveMinimum: 0.0
  example: 1
```
Invalid OpenAPI keyword: inclusiveMinimum should be minimum.
OpenAPI 3.x uses minimum (inclusive by default) and exclusiveMinimum: true for exclusive bounds. inclusiveMinimum is not a valid keyword.
📝 Fix keyword

```diff
 rank:
   type: number
   description: The position of the Segment in the parent Rundown.
-  inclusiveMinimum: 0.0
+  minimum: 0
   example: 1
```
🤖 Prompt for AI Agents
In `@packages/openapi/api/definitions/ingest.yaml` around lines 1172 - 1176, The
OpenAPI schema for the numeric property "rank" uses the invalid keyword
"inclusiveMinimum"; update the schema by replacing "inclusiveMinimum: 0.0" with
the valid OpenAPI keyword "minimum: 0.0" (retain type: number, description, and
example), and if an exclusive bound is ever needed use "exclusiveMinimum: true"
alongside "minimum" instead of "inclusiveMinimum".
```ts
test('Can delete multiple playlists', async () => {
	const result = await ingestApi.deletePlaylists({ studioId })
	expect(result).toBe(undefined)
})

test('Can delete playlist by id', async () => {
	const result = await ingestApi.deletePlaylist({
		studioId,
		playlistId: playlistIds[0],
	})
	expect(result).toBe(undefined)
})
```
Test sequence may fail on empty playlistIds after bulk delete.
The 'Can delete multiple playlists' test (lines 56-59) runs before 'Can delete playlist by id' (lines 61-67). After the bulk delete, playlistIds[0] may reference a deleted playlist, causing the subsequent test to fail with a 404. Consider restructuring tests to avoid this dependency or using separate test data.
🤖 Prompt for AI Agents
In `@packages/openapi/src/__tests__/ingest.spec.ts` around lines 56 - 67, The two
tests depend on the same playlistIds array causing a race where deletePlaylists
removes items and makes playlistIds[0] invalid for deletePlaylist; update tests
so they don't share mutated state: either run the 'Can delete playlist by id'
test before calling ingestApi.deletePlaylists, or better, create a fresh
playlist (or repopulate playlistIds) inside the 'Can delete playlist by id' test
and then call ingestApi.deletePlaylist with that new id; refer to the test names
and the ingestApi.deletePlaylists / ingestApi.deletePlaylist calls and the
playlistIds array when making the change.
```ts
test('Can update a part', async () => {
	newIngestPart.name = newIngestPart.name + ' added'
```
Potential TypeError if newIngestPart is undefined.
newIngestPart is only assigned in the 'Can request part by id' test (line 331). If that test fails or is skipped, accessing newIngestPart.name here will throw a TypeError. Add a guard or assertion.
🛡️ Proposed fix

```diff
 test('Can update a part', async () => {
+	if (!newIngestPart) throw new Error('newIngestPart was not initialized by prior test')
 	newIngestPart.name = newIngestPart.name + ' added'
```
🤖 Prompt for AI Agents
In `@packages/openapi/src/__tests__/ingest.spec.ts` around lines 403 - 404, The
'Can update a part' test uses newIngestPart.name but newIngestPart is only set
in the 'Can request part by id' test, so add a safety check: assert
newIngestPart is defined (or throw a clear error) before mutating it in the 'Can
update a part' test, or skip/mark the test as dependent if unset; specifically,
modify the 'Can update a part' test to verify newIngestPart is not
undefined/null (e.g., with a guard or expect(newIngestPart).toBeDefined())
before doing newIngestPart.name = ... to avoid a TypeError when newIngestPart is
missing.
About the Contributor
This PR is posted on behalf of EVS Broadcast Equipment.
Type of Contribution
This is a: Feature
State of the work offered
The feature - What, why and how
An HTTP REST API for creating and updating rundown data.
This allows standard web clients to connect to Sofie to serve rundowns and rundown content, taking on the role that traditional NRCSes have in existing workflows. Sofie already covers traditional workflows well through MOS. This addition of an HTTP REST API is a step towards open and web-centric systems, which will further enable integrations and smooth workflows.
The implementation follows established patterns and is implemented in two separate parts: an OpenAPI definition and one part in the existing REST API implementation. A summary of the changes is listed at the end of this document.
Example use cases
Example 1: HTTP-based NRCS/CMS
*not comparable to the full-blown DDP-based PeripheralDevice gateways.
Example 2: Headless Sofie
Topologies
Design, considerations and choices made
RESTful principles are followed
The verbs GET, POST, PUT, DELETE are used.
Each of Playlists, Rundowns, Segments and Parts are made available as resources, following this pattern:
`/ingest/{studioId}/playlists/{playlistId}/rundowns/{rundownId}/segments/{segmentId}/parts/{partId}`

where, for example, `/ingest/{studioId}/playlists` is a collection and `/ingest/{studioId}/playlists/{playlistId}` is a resource in the Playlists collection.

Calling `DELETE` on a resource will always also remove any resources owned by the deleted resource. Calling `PUT` on a collection will replace all resources of that type under the collection: when an array of e.g. Segments is received for a Rundown, any already existing Segments within the Rundown that are not included in the received array will be deleted. In this way, the entire contents of a Rundown can be replaced by sending a new array of Segments via a single `PUT`.

The `playlistId`, `rundownId`, `segmentId`, and `partId` can be any unique IDs of the external system's choosing, so long as they are consistent for a particular resource.

20x Response codes
A 20x response means the request itself is valid and the system goes off doing the asynchronous ingest operation, which can later fail with no way of communicating this back to the external system.
There is an opt-in mechanism for Blueprints validation of payloads during processing of the request. If specified, a Blueprint function can do synchronous validation before Core responds on the request. A failed validation in this Blueprints-hook results in a 40x error.
Even after a successful validation by Blueprints, the Core ingest job will perform asynchronously and might fail in silence after having reported 20x back to the external client.
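The replace-on-PUT collection semantics described under "RESTful principles are followed" can be sketched as a small diff helper. This is an illustrative sketch, not the actual implementation: only the `externalId` field is taken from the source, and the helper name is hypothetical.

```typescript
// Sketch of PUT-on-collection semantics: everything in the incoming array
// is created or updated, and stored resources absent from the incoming
// array are scheduled for deletion.
interface IngestResource {
	externalId: string
}

function diffCollection<T extends IngestResource>(
	existing: T[],
	incoming: T[]
): { upsert: T[]; remove: T[] } {
	const incomingIds = new Set(incoming.map((r) => r.externalId))
	return {
		upsert: incoming,
		remove: existing.filter((r) => !incomingIds.has(r.externalId)),
	}
}

const storedSegments = [{ externalId: 'seg-a' }, { externalId: 'seg-b' }]
const receivedSegments = [{ externalId: 'seg-b' }, { externalId: 'seg-c' }]
const plan = diffCollection(storedSegments, receivedSegments)
```

Here a single `PUT` of `receivedSegments` would upsert seg-b and seg-c and delete seg-a, replacing the entire contents of the Rundown in one request.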
Playlists lifespans in relation to Rundowns
Rundowns with the same `playlistExternalId` are merged into one Playlist, and Rundowns with unique `playlistExternalId`s live in discrete Playlists. There are `GET` and `DELETE` methods for Playlists, but no `POST` or `PUT`.

The Resync mechanism
MOS allows Sofie to request an update of all rundown data. This user-facing feature is intended to allow users to refresh the rundown data if they observe something is outdated or wrong.
The same mechanism is implemented for this HTTP Ingest API:
ExternalIds
With the exception of the `studioId`, the external system creates and owns all IDs in the `/ingest/{studioId}/playlists/{playlistId}/rundowns/{rundownId}/segments/{segmentId}/parts/{partId}` structure. These IDs map to `externalId` in the Sofie database.

For the external system to be able to control Sofie (Use case 2, "Headless Sofie"), it needs to be able to perform rundown playout operations efficiently. Currently, the User Actions API expects internal IDs for its operations. This would have meant that the external system had to query back all the content it has created, in order to build and keep an in-memory map between external and internal IDs before it could send any playout command, which wouldn't be pleasant to work with as a consumer of the APIs.
For this reason, we have simplified this by allowing either internalId or externalId to be used interchangeably on the User Actions API. This greatly simplifies and enables the described Use case 2, "Headless Sofie".
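The interchangeable-ID lookup can be sketched as a query that matches either field, echoing the Mongo-style `$or` visible in the review diffs earlier in this thread. The `Doc` shape and helper name below are illustrative assumptions, not Sofie's data model.

```typescript
// Sketch: resolve a path ID against either the internal _id or the
// externally-owned externalId, so API consumers can use either form.
interface Doc {
	_id: string
	externalId: string
	name: string
}

function findByEitherId(docs: Doc[], id: string): Doc | undefined {
	// Equivalent to a Mongo-style query: { $or: [{ _id: id }, { externalId: id }] }
	return docs.find((d) => d._id === id || d.externalId === id)
}

const rundowns: Doc[] = [
	{ _id: 'internal-1', externalId: 'show-2024-05-01', name: 'Morning Show' },
]
```

Either `'internal-1'` or `'show-2024-05-01'` resolves to the same rundown, so the external system never needs to learn Sofie's internal IDs.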
Rundown, Segment, and Part models
Rundowns, Segments, and Parts are modelled to match common public properties and database structure.
The Part `payload` property is untyped; it carries whatever incoming content the Blueprint expects in order to create Parts with Pieces.
The structure allows for a closed loop where Blueprints define the schemas, which external systems use to generate the requests with typed payload, which ultimately gets synchronously parsed and validated during ingest by Blueprints again.
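One turn of that closed loop could look roughly like this: a blueprint-supplied check runs synchronously against the untyped Part payload before Core answers the request. The function name, payload fields, and error shape here are assumptions for illustration, not the actual blueprints-integration API.

```typescript
// Sketch of synchronous payload validation during ingest: a non-empty
// result would map to a 40x response, an empty result lets the
// asynchronous ingest job proceed.
interface PayloadValidationError {
	path: string
	message: string
}

// Hypothetical blueprint hook; the real method lives in
// packages/blueprints-integration and may differ.
function validatePartPayload(payload: unknown): PayloadValidationError[] {
	if (typeof payload !== 'object' || payload === null) {
		return [{ path: '', message: 'payload must be an object' }]
	}
	const p = payload as Record<string, unknown>
	const errors: PayloadValidationError[] = []
	if (typeof p.type !== 'string') {
		errors.push({ path: 'type', message: 'type must be a string' })
	}
	return errors
}

const validPayload = validatePartPayload({ type: 'camera' })
const invalidPayload = validatePartPayload({ duration: 5 })
```

Because the blueprint owns both the schema and the check, an external system generating requests from that schema gets its payloads parsed by the same rules that produced them.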
Security and auth is excluded from the scope of this PR
This follows existing patterns and is not unique to this Ingest API addition; it is to be dealt with elsewhere in the stack or in a more global manner.
Summary of changes
Design: `packages/openapi/`
Implementation: `meteor/server/api/rest/v1/ingest.ts`

- Registered in `./index.ts` through the established `register*Router(sofieAPIRequest)`.
- Dispatches ingest jobs (`runIngestOperation()`) in the same way as the existing `ingestMosDevice`.
- Uses rundownSource type `restApi` rather than `httpIngest`, which could be confusing considering the existing non-REST "http ingest api" and rundownSource type "http".
- Blueprint validation hooks in `packages/blueprints-integration/src/api/studio.ts`.

Side-effect: `packages/shared-lib/src/peripheralDevice/ingest.ts`
Side-effect: Job worker playout implementation
See the paragraph on "ExternalIds"
Changes to lookup of playlists, rundowns, segments and parts in the following places:
Testing
Status
HTTP REST Ingest API Implementation
Overview
This PR introduces a comprehensive HTTP REST Ingest API for the Sofie system, enabling standard web clients (NRCS, CMS, and gateway systems) to serve rundown data and content. The feature provides full CRUD operations for Playlists, Rundowns, Segments, and Parts with structured HTTP endpoints, OpenAPI definitions, and integrated blueprint validation.
Core Components Added
API Implementation Layer
- IngestServerAPI class (`meteor/server/api/rest/v1/ingest.ts`): ~1,710 lines implementing complete CRUD operations with:
- REST Interface Definitions (`meteor/server/lib/rest/v1/ingest.ts`): Type contracts defining:

OpenAPI Specification

- OpenAPI Definitions (`packages/openapi/api/definitions/ingest.yaml`): ~1,283 lines defining:
- Path Definitions (`packages/openapi/api/actions.yaml`): Registers eight new ingest paths referencing the comprehensive definitions file
- API Test Suite (`packages/openapi/src/__tests__/ingest.spec.ts`): ~454 lines of end-to-end tests validating all CRUD operations across resources

API Structure & Endpoints
All endpoints follow the pattern: `/ingest/{studioId}/playlists/{playlistId}/rundowns/{rundownId}/segments/{segmentId}/parts/{partId}`

Design Patterns
Payload Validation
- `meteor/server/api/rest/v1/typeConversion.ts`
- `packages/blueprints-integration/src/api/studio.ts`, with optional validation methods:

Entity Resolution

- `meteor/server/api/rest/v1/playlists.ts`:

Source Management

- `packages/corelib/src/dataModel/Rundown.ts`, with:
- `meteor/server/api/ingest/actions.ts`: handles restApi source reloads via POST to resyncUrl with error logging

External ID Support

- `packages/shared-lib/src/peripheralDevice/ingest.ts`
- `packages/job-worker/src/ingest/runOperation.ts`

Integration Points

- `meteor/server/api/rest/v1/index.ts`

Key Characteristics
Test Coverage
Comprehensive test suite covers all CRUD operations including:
Code Quality Notes