Oracle Table Input with variables (${LAST_RUN_TS}, ${LAST_RUN_ID}) failing in Apache Hop #6250
Replies: 25 comments 10 replies
-
Your parameterized query and the screenshot both look correct, but there could be subtle issues we're missing from the limited information here.
That's because that is what happens. The variables in your query are parsed before the query is sent to the database as a SQL string; variable substitution doesn't prevent SQL injection or anything similar. Try hovering over your variables to see what their exact values are, and write your parsed SQL to the logs with a Write To Log transform to troubleshoot and make sure the SQL is exactly what you expect it to be. An alternative approach could be to pass data from fields with the "Insert data from transform" option, but you'll need to prepare your input stream correctly in that case. Check the docs and the examples in the samples project. What you're trying to do is a common and widely used pattern; it shouldn't be too much of a problem to get it to work.
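To make the point concrete: variable substitution is plain text replacement performed before the SQL ever reaches the database, so there is no escaping or binding involved. A minimal Python sketch of that idea follows; the `resolve_variables` helper is hypothetical, for illustration only, not Hop's actual implementation:

```python
import re

def resolve_variables(sql: str, variables: dict) -> str:
    """Plain-text substitution of ${NAME} markers, done BEFORE the
    query is sent to the database (no escaping, no parameter binding)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: str(variables[m.group(1)]), sql)

sql = ("SELECT * FROM t WHERE created_on > "
       "TO_TIMESTAMP('${LAST_RUN_TS}', 'YYYY-MM-DD HH24:MI:SS.FF') "
       "AND id > ${LAST_RUN_ID}")
resolved = resolve_variables(sql, {
    "LAST_RUN_TS": "1970-01-01 00:00:00.000000",
    "LAST_RUN_ID": 0,
})
# Logging the resolved SQL before sending it is exactly the
# troubleshooting step suggested above.
print(resolved)
```

If the printed SQL still contains a literal `${...}` marker, the variable was never set in the scope the pipeline runs in.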
-
2025/12/23 11:20:01 - Hop - Starting workflow
-
This seems like a different error than the one you were reporting. Are you using a Table Output to insert into the MySQL table? If so, we will need a reproduction path for that error: the table definition (DDL) and an example row that is causing it.
-
Without the row generation and JavaScript substitution logic, the simple table input/output works perfectly fine. However, once those parts are introduced, dummy records start getting inserted into the movements.
-
Table definition alone won't help us; we will need a pipeline with a row (data grid) and a Table Output that reproduces that error.
-
Also, you left Generate Rows at 10 (the default); you almost certainly need a single row. It's not 100% clear from what you shared, but at the very least you should only get the row below once, which would make this easier to debug.
-
Converting this to a discussion, since this doesn't seem to be a bug or a feature request.
-
At the risk of pointing out the obvious... why in the world are you using the JS transform to do that? If the variables are not landing exactly as expected, then land them as rows with a Get Variables, bash them into the correct format with a Calculator transform, filter with a Select Values to keep what you need in the exact order you need it, and feed that into the Table Input as ? substitutions. It's also likely working only intermittently because, to enable streaming of data, everything in a pipeline happens at the same time. You need to be aware of that and design for it.
-
Good question, and this is actually the key distinction. What you're running into isn't a limitation of Table Input, it's when the values are being resolved. LAST_RUN_TS and LAST_RUN_ID are being treated as compile-time variables, not row-level runtime values, which is why they don't resolve when embedded directly in the query. JavaScript "works" only because it forces evaluation earlier, but that's also why it's fragile.
Instead of JS substitution, the reliable approach is to pass these as data, not variables. Suggested pattern:
- Use Get Variables (or a Table Input / Row Generator) to land the values as rows.
- Use Calculator / Select Values to enforce correct data types and format the timestamp explicitly (no implicit casting).
- In Table Input, set "Replace variables in script?" to No.
- Use ? placeholders in the SQL and let the incoming row bind them at runtime.
Example: as long as the incoming stream has column 1 → timestamp and column 2 → number, the engine will bind them correctly at execution time.
The intermittent behavior you're seeing with JS is due to pipeline streaming: variable resolution and query execution can happen out of order unless everything is serialized into the same row flow.
Happy to walk through a concrete example if needed.
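The ? placeholder approach is the same mechanism JDBC prepared statements use: the driver binds values at execution time instead of splicing text into the SQL. A database-agnostic sketch of the binding (using Python's sqlite3 in place of Oracle/JDBC, with a made-up table and values) looks like this:

```python
import sqlite3

# In-memory stand-in for the source table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_details (id INTEGER, created_on TEXT)")
conn.executemany(
    "INSERT INTO order_details VALUES (?, ?)",
    [(1, "2025-01-01 10:00:00.000000"),
     (2, "2025-01-02 10:00:00.000000"),
     (3, "2025-01-02 10:00:00.000000")],
)

# The incoming row supplies the watermark values at runtime; the driver
# binds them to the ? placeholders -- no string substitution happens.
last_run_ts, last_run_id = "2025-01-02 10:00:00.000000", 2
rows = conn.execute(
    """SELECT id, created_on FROM order_details
       WHERE created_on > ? OR (created_on = ? AND id > ?)
       ORDER BY created_on, id""",
    (last_run_ts, last_run_ts, last_run_id),
).fetchall()
print(rows)  # only the rows strictly past the (timestamp, id) watermark
```

Note the parameter order: the placeholders are bound positionally, which is exactly why the advice above stresses keeping the incoming fields in the exact order the query expects.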
-
Hi @dmainou,
As discussed, I've been spending additional time exploring the suggested approach and evaluating alternate options as well. However, I'm still not getting a clear resolution. Since management is keen to close this out at the earliest, I believe a quick discussion would help align on the right direction. If possible, can we connect for 30 minutes to walk through this together? I'm available within the next 30 minutes as well, if that works for you.
Thanks,
-
Thanks a lot for the guidance so far. I now understand that most of the things I'm trying to do can be achieved using the tableinput-accept-input.hpl or tableinput-variables.hpl sample pipelines. I also completely get that this is a holiday season for many people around the world, and community support is voluntary. No pressure at all; I genuinely appreciate the help being offered here. Since my use case may get more complex as I continue working on it, I'd love to know if there are any additional suggestions or best practices you recommend for structuring a more advanced solution. Wishing you and the whole team a wonderful Christmas and New Year!
-
Thanks, I understand your point. I've already tried timestamp validation from my end in multiple ways. If anyone can share a proper guideline or relevant reference material, that would really help me move forward.
-
Query used:
Problem: FROM_TS and TO_TS are defined as TIMESTAMP columns. When queried directly as TIMESTAMP, the values do not appear in the preview / result set of Table Input. However, when I explicitly cast them to STRING (or VARCHAR), the values show correctly in the preview.
Example: SELECT
Questions:
- Is this a known issue with timestamp handling in preview / result rendering?
- Could this be related to timezone, session settings, or internal conversion factors?
- Is there a recommended way to handle TIMESTAMP columns in control tables to avoid this behavior?
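The cast workaround described here amounts to making the conversion explicit rather than relying on implicit TIMESTAMP handling. As a hedged, database-free sketch of the same idea in Python (the format constant is my own mapping of Oracle's 'YYYY-MM-DD HH24:MI:SS.FF6' mask, not anything from Hop):

```python
from datetime import datetime

# Assumption: microsecond precision, no time zone component, which is
# what Oracle's 'YYYY-MM-DD HH24:MI:SS.FF6' mask describes.
ORACLE_TS_FORMAT = "%Y-%m-%d %H:%M:%S.%f"

ts = datetime(2025, 12, 23, 11, 20, 1, 123456)
as_string = ts.strftime(ORACLE_TS_FORMAT)            # explicit text form
round_trip = datetime.strptime(as_string, ORACLE_TS_FORMAT)

print(as_string)  # 2025-12-23 11:20:01.123456
```

Nothing is lost in the round trip, which is why formatting the timestamp to text explicitly (e.g. TO_CHAR in the query) is a reasonable diagnostic step when the native TIMESTAMP values come back empty.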
-
What I've already tried:
- Disabling / enabling the timestamp option
- Session refresh and restarting Apache Hop 2.16.0
- Re-executing the query
-
It looks like you're still struggling with variable replacement, and are now adding potential timestamp issues into the mix. Can you retrieve your timestamp values correctly if you use hard-coded parameter values, or if you just fetch 5 or so records without any parameters?
-
I want to clarify and consolidate my earlier post.
SELECT
Note: Despite the column names, both FROM_TS and TO_TS are defined as TIMESTAMP in the database. When I try to preview records for this query, no records are returned and the values come back as null. There is no transformation or logic applied; it's a direct select from the database. The timestamp handling option is enabled (I also tried disabling/enabling it, no change).
Question: Since this is a direct database read and not a transformation issue:
- Is any implicit conversion or formatting required for TIMESTAMP columns to appear in preview?
- Is this expected behavior due to preview engine limitations?
- Is there a known issue with TIMESTAMP data types in control tables?
-
but changing the database schema or data type is not a feasible approach for us. Instead of modifying the database, are there any tool-level or session-level options available to handle this properly?
-
As suggested, I tested the full table data retrieval; however, even then the timestamp is not being retrieved.
-
The same query works fine in DBeaver and returns the expected records. Additionally, data is retrieved correctly when SYSTIMESTAMP is used as the default parameter, but the issue occurs when the timestamp value is fetched from the database.
-
Could you create a ticket for this?
-
Hop version: 2.16.0
-
Issue id: #6279
-
Hi Hop Community,
I’m trying to implement incremental data migration from Oracle to MySQL using Apache Hop.
Goal:
Oracle query (in Table Input):
SELECT *
FROM APPS.XXSIFY_CPQ_ORDER_DETAILS_UDM
WHERE
(
CREATED_ON > TO_TIMESTAMP('${LAST_RUN_TS}', 'YYYY-MM-DD HH24:MI:SS.FF')
)
OR
(
CREATED_ON = TO_TIMESTAMP('${LAST_RUN_TS}', 'YYYY-MM-DD HH24:MI:SS.FF')
AND ID > ${LAST_RUN_ID}
)
ORDER BY CREATED_ON, ID
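The WHERE clause above is a standard keyset watermark over (CREATED_ON, ID). The companion step, not shown in the thread, is advancing LAST_RUN_TS / LAST_RUN_ID after a successful load. A hedged Python sketch of that bookkeeping (function name and dict-shaped rows are my own assumptions):

```python
def advance_watermark(rows, last_run_ts, last_run_id):
    """Return the new (LAST_RUN_TS, LAST_RUN_ID) after loading `rows`.
    Because the query orders by (CREATED_ON, ID), the new watermark is
    simply the lexicographic max of those two columns; it is unchanged
    when no rows were loaded."""
    if not rows:
        return last_run_ts, last_run_id
    return max((r["created_on"], r["id"]) for r in rows)

rows = [
    {"id": 7, "created_on": "2025-01-02 00:00:00.000000"},
    {"id": 9, "created_on": "2025-01-02 00:00:00.000000"},
    {"id": 3, "created_on": "2025-01-03 00:00:00.000000"},
]
ts, run_id = advance_watermark(rows, "1970-01-01 00:00:00.000000", 0)
print(ts, run_id)  # 2025-01-03 00:00:00.000000 3
```

Comparing the (timestamp, id) pair as a unit is what makes the query safe to re-run: rows sharing the last-seen timestamp are disambiguated by ID instead of being skipped or duplicated.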
What works:
LAST_RUN_TS = 1970-01-01 00:00:00.000000
LAST_RUN_ID = 0
Problem:
Errors observed:
Environment:
Questions:
Any guidance or best practices would be appreciated.
Thanks!