Merge development (with our current Datawarehouse code) into the AWS branch. #232
jgrantr wants to merge 70 commits into feature/aws-sdk-v3-again from development

Conversation
…onnectors into redshift-load-optimization
…oading into the DW and then having a processing failure
…onnectors into redshift-load-optimization
…. It now checks for null and undefined
S3 entity table loading/unloading
Redshift load optimization
ES-2516 - reset deleted flag on update or insert
…fixes ES-2516 - don't write deletes to the CSV file/staging table
⛔ Snyk checks have failed. 31 issues have been found so far.
Up to 10 code/Snyk issues appear as inline comments below; view the rest through the details page. 💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.
Bug (Fixed): Correct Writable Property Name Used
Incorrect property check for stream writability. The code checks errorStream.Writable (capital W), but Node.js streams expose the lowercase writable property to indicate whether a stream can still be written to. Because Writable is undefined on a stream instance, this check is always falsy, so the stream is never properly closed. It should be errorStream.writable instead of errorStream.Writable.
common/datawarehouse/load.js#L317-L318
connectors/common/datawarehouse/load.js
Lines 317 to 318 in 1a4a4c8
 * @param error {string}
 */
function handleFailedValidation (ID, source, eventObj, error) {
function handleFailedValidation(ID, source, eventObj, error) {
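For reference, a minimal sketch of the lowercase writable check the comment describes; errorStream and closeErrorStreamIfOpen here are illustrative names, not the connector's actual code:

```js
const { PassThrough } = require('stream');

// Stand-in for the connector's error stream (illustrative only).
const errorStream = new PassThrough({ objectMode: true });

function closeErrorStreamIfOpen(stream) {
  // `writable` (lowercase) is the real Node.js property; `stream.Writable`
  // is undefined on a stream instance, so a check on it is always falsy
  // and the stream would never be ended.
  if (stream && stream.writable) {
    stream.end();
  }
}

closeErrorStreamIfOpen(errorStream);
```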
Bug: Incorrect Case-Sensitive Property Check Breaks Error Stream Initialization
Same incorrect property check as above. The code checks !errorStream.Writable but should check !errorStream.writable (lowercase w). This will cause the error stream initialization logic to be executed every time handleFailedValidation is called, potentially creating multiple pipelines for the same error stream.
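A hedged sketch of the one-time initialization guard this implies; the pipeline body and identifiers below are placeholders, not the repository's implementation:

```js
const { PassThrough } = require('stream');

let errorStream = null;

// Placeholder for whatever pipeline the connector wires up for failed validations.
function createErrorPipeline(stream) {
  stream.on('data', (failure) => {
    console.error('validation failure', failure);
  });
}

function handleFailedValidation(ID, source, eventObj, error) {
  // With `!errorStream.Writable` (capital W) this branch is taken on every
  // call, attaching a new pipeline each time. The lowercase `writable`
  // property only re-initializes when the stream is missing or closed.
  if (!errorStream || !errorStream.writable) {
    errorStream = new PassThrough({ objectMode: true });
    createErrorPipeline(errorStream);
  }
  errorStream.write({ ID, source, eventObj, error: String(error) });
}
```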
Note
Adds Redshift S3 load pipeline and hashed surrogate key support to the Postgres connector, improves DW delete/SCD handling, and introduces a release publish workflow.
- New S3 load pipeline (streamToTableFromS3) and Redshift-specific staging (DISTSTYLE ALL, SORTKEY), with sort key discovery and COPY from S3.
- Use bigint for SKs/dimension FKs when hashed keys are enabled; adjust default timestamps and index creation per engine.
- combine.js: merge logic treats a delete on either side as authoritative, simplifying updates vs. deletes.
- load.js: route validation errors to a Leo streams pipeline; minor stream/signature tweaks.
- Release publish workflow for the common, entity-table, and postgres packages.

Written by Cursor Bugbot for commit 1a4a4c8. This will update automatically on new commits.
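For orientation, a rough sketch of the staging-and-COPY pattern the summary above describes; the table name, sort key, S3 path, and IAM role are example assumptions, not values from this PR:

```js
// Illustrative SQL a Redshift load-from-S3 flow might issue via the
// connector's SQL client. All identifiers and the S3/IAM values are
// assumptions for the example.
const createStaging = `
  CREATE TEMP TABLE staging_dim_customer
  DISTSTYLE ALL
  SORTKEY (customer_id)
  AS SELECT * FROM dim_customer WHERE 1 = 0;
`;

const copyFromS3 = `
  COPY staging_dim_customer
  FROM 's3://example-bucket/dw-exports/dim_customer.csv.gz'
  IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
  FORMAT AS CSV GZIP;
`;

const mergeIntoTarget = `
  BEGIN;
  DELETE FROM dim_customer
    USING staging_dim_customer s
    WHERE dim_customer.customer_id = s.customer_id;
  INSERT INTO dim_customer SELECT * FROM staging_dim_customer;
  COMMIT;
`;
```

Loading into a staging table first and then merging keeps the COPY fast (no per-row upserts) and lets a processing failure abort before the target table is touched.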