flarco commented Jan 15, 2026

Sling CLI v1.5.5 Release Notes

New Features

  • DB2 Database Support - Initial support for IBM DB2 with connection templates and full test suite
  • definition-only Mode - Create table/file definitions without transferring data (parquet/arrow only for files)
  • _sling_synced_at Column - Enable with SLING_SYNCED_AT_COLUMN=true for timestamp tracking
  • _sling_synced_op Column - Tracks operation type: I (insert), U (update), D (soft delete)
  • slugify Function - Convert strings to URL-friendly slugs
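
As a quick illustration of the new slugify function, the sketch below mirrors typical slugify behavior (lowercase, runs of non-alphanumerics collapsed to hyphens); Sling's exact rules may differ slightly, so treat this as an assumption rather than the implementation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// slugifyExample mirrors what a typical slugify transform does: lowercase the
// input and collapse runs of non-alphanumeric characters into single hyphens.
// Sling's exact rules may differ; this is only an illustration.
func slugifyExample(s string) string {
	s = strings.ToLower(s)
	s = regexp.MustCompile(`[^a-z0-9]+`).ReplaceAllString(s, "-")
	return strings.Trim(s, "-")
}

func main() {
	fmt.Println(slugifyExample("Hello, World! 2026")) // hello-world-2026
}
```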

Bug Fixes

  • DDL primary key with WITH clauses - Fixed incorrect placement when table options present (#694)
  • Prometheus streaming deadlock - Fixed stall on large time ranges (#700)
  • MySQL boolean CSV streaming - Fixed boolean handling in LOAD DATA LOCAL INFILE
  • delete_missing with transforms - Fixed errors in PK-only queries when transforms present
  • Connection caching - Fixed hash method and ODBC initialization

Improvements

  • Go 1.25 - Upgraded build toolchain
  • Docker ODBC packages - Added unixodbc and odbcinst
  • Oracle XMLTYPE - Support for BigQuery transfers
  • MySQL LOAD DATA LOCAL - Enhanced NULL value handling
  • S3 multi-bucket access - Improved support in replications
  • Schema discovery - Thread-safe schemata merging, column-level support

flarco and others added 30 commits January 6, 2026 12:19
- refactor `AddPrimaryKeyToDDL` to accurately locate the closing parenthesis
  of column definitions, ensuring `PRIMARY KEY` is inserted before any
  table options like `WITH` clauses (fixes #694)
- previously, the primary key could be incorrectly appended inside the
  `WITH` clause, resulting in invalid SQL DDL
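
To make the placement issue concrete, here is a simplified sketch in Go (the helper, table, and options are illustrative, not the real `AddPrimaryKeyToDDL`): the primary key must be spliced in before the closing parenthesis of the column definitions rather than appended after the `WITH` clause.

```go
package main

import (
	"fmt"
	"strings"
)

// addPrimaryKeyBeforeOptions is a simplified, hypothetical illustration of the
// fix: the PRIMARY KEY clause has to land inside the column-definition
// parentheses, not after table options such as a WITH clause. It naively looks
// for ") WITH "; the real AddPrimaryKeyToDDL locates the closing parenthesis
// robustly, including nested parentheses.
func addPrimaryKeyBeforeOptions(ddl, pkCols string) string {
	pk := fmt.Sprintf(", PRIMARY KEY (%s)", pkCols)
	if i := strings.Index(strings.ToUpper(ddl), ") WITH "); i >= 0 {
		return ddl[:i] + pk + ddl[i:]
	}
	return strings.TrimSuffix(ddl, ")") + pk + ")"
}

func main() {
	ddl := `create table t1 (id int, name varchar(50)) WITH (DATA_COMPRESSION = PAGE)`
	fmt.Println(addPrimaryKeyBeforeOptions(ddl, "id"))
	// create table t1 (id int, name varchar(50), PRIMARY KEY (id)) WITH (DATA_COMPRESSION = PAGE)
}
```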

test(replication): add test for custom table_ddl with primary key and with clause

- introduce `r.88.table_ddl_with_clause.yaml` to validate the fix for
  GitHub Issue #694 in an end-to-end replication scenario
- verify that the target table is created with the specified primary key
  and table options (e.g., `DATA_COMPRESSION`)
- add comprehensive unit tests for `AddPrimaryKeyToDDL` covering DDLs
  with and without `WITH` clauses, nested parentheses, and different dialects

- introduce `definition-only` mode for creating table and file definitions without data
- support `definition-only` mode for database targets (e.g., postgres to mssql)
- support `definition-only` mode for file targets (e.g., postgres to parquet/arrow)
- inject `WHERE 1=0` into source queries to prevent data transfer
- skip empty buffer check for file writes in `definition-only` mode
- add validation for file targets, restricting to parquet and arrow formats
- add cli tests for database and file targets
- add cli test for `definition-only` failure with csv file target
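
The `WHERE 1=0` injection mentioned above is the standard trick for getting a zero-row result set that still carries full column metadata. A minimal sketch of the idea; the wrapper shape is an assumption, not the exact SQL Sling emits:

```go
package main

import "fmt"

// wrapDefinitionOnly shows the idea behind definition-only mode: wrap the
// source query so it returns zero rows while still exposing the column names
// and types needed to create the target table or parquet/arrow file. The
// wrapper shape here is an assumption, not the exact SQL Sling generates.
func wrapDefinitionOnly(sourceSQL string) string {
	return fmt.Sprintf("select * from (%s) as t where 1=0", sourceSQL)
}

func main() {
	fmt.Println(wrapDefinitionOnly("select id, name, created_at from public.users"))
}
```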
- introduce `db2` type in `dbio` for specific DB2 handling
- add `db2.yaml` template with core SQL operations and metadata queries
- implement `GetType` method in `connection` to return a more accurate type for ODBC connections
- update `connection_local` to use `GetType` for better connection descriptions
- modify `database` package to use `Template().Quote` for consistent quoting
- enhance `GenerateDDL` to support `partition by`, `cluster by`, `distkey`, `sortkey`, and `primary key` clauses
- update `GenerateMergeConfig` to include `src_insert_fields`, `tgt_fields`, and `placeholder_fields` for merge statements
- adjust `GetOptimizeTableStatements` for column type optimization using `Template().Quote`
- update `CompareChecksums` to use `Template().Quote` for field quoting
- modify `AddMissingColumns` to use `Template().Quote` for column quoting
- update `dbio_types` to support `column_upper` variable from templates for case normalization
- add `TestSuiteDatabaseDB2` to run tests for DB2 database
- update `go.mod` and `go.sum` for `godbc` dependency
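
For context on the `Template().Quote` changes above, dialect-aware identifier quoting generally looks like the sketch below; this is a generic illustration, not Sling's actual `Template().Quote` method:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdent is a generic sketch of dialect-aware identifier quoting, not
// Sling's Template().Quote: most databases (including DB2 and Postgres) use
// double quotes, MySQL/MariaDB use backticks, SQL Server uses brackets.
func quoteIdent(dialect, ident string) string {
	switch dialect {
	case "mysql", "mariadb":
		return "`" + strings.ReplaceAll(ident, "`", "``") + "`"
	case "sqlserver":
		return "[" + strings.ReplaceAll(ident, "]", "]]") + "]"
	default: // postgres, db2, oracle, snowflake, ...
		return `"` + strings.ReplaceAll(ident, `"`, `""`) + `"`
	}
}

func main() {
	fmt.Println(quoteIdent("db2", "order_id")) // "order_id"
}
```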
…enhance MySQL/MariaDB connection for local infile support
When streaming data from Prometheus with large time ranges (> 1 hour),
the StreamRowsChunked function would stall after ~100 rows. This was
caused by the bwRows channel (buffer size 100) filling up because
the processBwRows goroutine was never started.

The issue occurred because StreamRowsChunked creates a datastream with
NewDatastreamContext() and pushes rows directly, bypassing Start()
which normally starts the processBwRows goroutine.

Changes:
- Add StartBwProcessor() public method to Datastream to start the
  bytes-written processor independently
- Call StartBwProcessor() in StreamRowsChunked before pushing rows

This ensures the bwRows channel is drained, preventing the producer
from blocking when the buffer fills up.
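
The underlying failure mode is the classic buffered-channel stall in Go: a producer keeps sending into a channel whose consumer goroutine was never started and blocks once the buffer fills. A stripped-down sketch of the pattern and the fix (the names are illustrative, not Sling's actual types):

```go
package main

import "fmt"

func main() {
	bwRows := make(chan int, 100) // like the bwRows channel: buffer of 100

	// The fix: start the drainer (the bytes-written processor) *before*
	// pushing rows. Without this goroutine, the 101st send below would
	// block forever, which is the stall seen after ~100 rows.
	done := make(chan struct{})
	go func() {
		for range bwRows {
			// consume; in Sling this tracks bytes written
		}
		close(done)
	}()

	for i := 0; i < 10000; i++ {
		bwRows <- i // producer pushing rows directly, as StreamRowsChunked does
	}
	close(bwRows)
	<-done
	fmt.Println("stream completed without stalling")
}
```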
…bucket access, and delete_missing with transforms
fix: prevent deadlock in Prometheus chunked streaming
flarco merged commit a01a07f into main Jan 26, 2026
1 check passed
flarco deleted the v1.5.5 branch January 26, 2026 15:57