Replace op_schema with op_signature #2771
base: main
Conversation
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
❌ 24 Tests Failed:
Pull request overview
This PR refactors how operator schemas are handled by replacing direct usage of onnx.defs.OpSchema with a new _schemas.OpSignature abstraction throughout the autocast, converter, and evaluator modules. This change improves modularity and encapsulation when dealing with operator signatures.
Changes:
- Replaced OpSchema with OpSignature in autocast functions for input type casting
- Updated converter to pass op_signature instead of op_schema to autocast functions
- Refactored evaluator methods to be private (_adapt_inputs, _adapt_attributes, _adapt_outputs) and removed the use_graph_attribute method along with its associated conditional logic
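The first bullet's call-site change can be sketched as follows. This is a minimal, self-contained illustration of consulting a signature's type constraints during input casting; `Parameter`, `OpSignature`, and `cast_inputs` here are simplified stand-ins, not the real `onnxscript._schemas` API.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for onnxscript's _schemas.OpSignature; the real
# class carries more metadata (attributes, variadic flags, etc.).
@dataclass
class Parameter:
    name: str
    allowed_types: frozenset  # simplified type constraint

@dataclass
class OpSignature:
    name: str
    params: list = field(default_factory=list)

def cast_inputs(op_signature: OpSignature, args: tuple) -> tuple:
    """Cast python ints to float when a parameter only accepts floats.

    Loosely mirrors how autocast can consult the signature's type
    constraints directly instead of going through an onnx.defs.OpSchema.
    """
    casted = []
    for param, arg in zip(op_signature.params, args):
        if isinstance(arg, int) and param.allowed_types == frozenset({"tensor(float)"}):
            casted.append(float(arg))
        else:
            casted.append(arg)
    return tuple(casted)

sig = OpSignature(
    "Add",
    [Parameter("A", frozenset({"tensor(float)"})),
     Parameter("B", frozenset({"tensor(float)"}))],
)
print(cast_inputs(sig, (1, 2.5)))  # (1.0, 2.5)
```

The point of the abstraction is that the cast logic only needs the per-parameter constraint sets, so swapping the schema source (OpSchema vs. OpSignature) leaves the casting code unchanged.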
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| onnxscript/_internal/autocast.py | Updated to use OpSignature instead of OpSchema, including parameter filtering and type constraint access changes |
| onnxscript/_internal/converter.py | Changed to pass op_signature instead of op_schema to autocast functions |
| onnxscript/_internal/evaluator.py | Made adapter methods private, removed use_graph_attribute method, and simplified _adapt_attributes to always use graph attributes |
Comments suppressed due to low confidence (1)
onnxscript/_internal/evaluator.py:548
- The use_graph_attribute method in ORTMixedEvaluator is now dead code since it's no longer called after removing the conditional logic in _adapt_attributes. This method should be removed from the ORTMixedEvaluator class.
def use_graph_attribute(self, schema: onnx.defs.OpSchema) -> bool:
return _schema_id(schema) not in self._python_ops
if not expected.is_homogeneous:
    args_typevars.append((x, None))
    continue
# TODO(justinchuby): Handle is_homogeneous params
Copilot
AI
Jan 15, 2026
This TODO indicates that the previous is_homogeneous handling was removed but needs to be reimplemented. The old code had logic to handle non-homogeneous variadic parameters. This unfinished work could lead to bugs if non-homogeneous variadic parameters are encountered.
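One possible shape for the missing handling is sketched below. This is purely illustrative: the parameter records and `assign_typevars` helper are hypothetical, but the idea matches the comment above, that a homogeneous variadic parameter shares one type variable across all absorbed arguments while a non-homogeneous one must give each absorbed argument its own.

```python
# Illustrative sketch only; the surrounding onnxscript machinery is not shown.
def assign_typevars(params, args):
    """Pair each argument with a type-variable name.

    A homogeneous variadic parameter shares one type variable across all
    the arguments it absorbs; a non-homogeneous one assigns each absorbed
    argument its own, otherwise distinct input types would be unified
    incorrectly.
    """
    pairs = []
    i = 0
    for p in params:
        if p["is_variadic"]:
            rest = args[i:]
            if p["is_homogeneous"]:
                pairs.extend((a, p["name"]) for a in rest)
            else:
                pairs.extend((a, f"{p['name']}_{k}") for k, a in enumerate(rest))
            i = len(args)
        else:
            pairs.append((args[i], p["name"]))
            i += 1
    return pairs

params = [{"name": "T", "is_variadic": True, "is_homogeneous": False}]
print(assign_typevars(params, ["x", "y"]))  # [('x', 'T_0'), ('y', 'T_1')]
```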
adapted_attributes[k] = v.function_ir.to_graph_proto()
for pyvar, onnxvar in v.function_ir.outer_scope_variables:
    closure[onnxvar.value.name] = v.frame.f_locals[pyvar]
Copilot
AI
Jan 15, 2026
The removal of the use_graph_attribute conditional means that _adapt_attributes now always converts OnnxClosure to graph proto format. This is a behavioral change that affects the ORTMixedEvaluator class, which previously could use the function directly for registered python ops instead of graph proto. This could break existing functionality where python implementations were registered via ORTMixedEvaluator.register().
Suggested change:
- adapted_attributes[k] = v.function_ir.to_graph_proto()
- for pyvar, onnxvar in v.function_ir.outer_scope_variables:
-     closure[onnxvar.value.name] = v.frame.f_locals[pyvar]
+ # If the closure captures outer-scope variables, we must materialize
+ # a graph proto and populate the closure mapping so that evaluators
+ # can rebind these values when executing the attribute.
+ if v.function_ir.outer_scope_variables:
+     adapted_attributes[k] = v.function_ir.to_graph_proto()
+     for pyvar, onnxvar in v.function_ir.outer_scope_variables:
+         closure[onnxvar.value.name] = v.frame.f_locals[pyvar]
+ else:
+     # For closures without captured outer-scope variables, avoid
+     # forcing a graph-valued attribute so that evaluators such as
+     # ORTMixedEvaluator can continue to use the underlying function
+     # implementation directly (e.g., for registered python ops).
+     adapted_attributes[k] = v.function_ir
  A closure that can be used to evaluate graph-valued attributes.
  """
- use_graph_attribute = self.use_graph_attribute(schema)
+ use_graph_attribute = self.use_graph_attribute(op_singature)
Check failure
Code scanning / lintrunner
PYLINT/E0602 Error
See undefined-variable. To disable, use # pylint: disable=undefined-variable
Check failure
Code scanning / lintrunner
RUFF/F821 Error
See https://docs.astral.sh/ruff/rules/undefined-name
- def adapt_attributes(
-     self, schema: onnx.defs.OpSchema, attributes: Mapping[str, ExtendedModeValue]
+ def _adapt_attributes(
+     self, op_signature, attributes: Mapping[str, ExtendedModeValue]
Check warning
Code scanning / lintrunner
PYLINT/W0613 Warning
See unused-argument. To disable, use # pylint: disable=unused-argument
Check warning
Code scanning / lintrunner
PYLINT/W0612 Warning
See unused-variable. To disable, use # pylint: disable=unused-variable
Check warning
Code scanning / lintrunner
RUFF/F841 Warning
See https://docs.astral.sh/ruff/rules/unused-variable
- def use_graph_attribute(self, schema: onnx.defs.OpSchema):
+ def use_graph_attribute(self, op_signature: _schemas.OpSignature) -> bool:
Check warning
Code scanning / lintrunner
PYLINT/W0613 Warning
See unused-argument. To disable, use # pylint: disable=unused-argument
This pull request refactors how operator schemas are handled throughout the autocast, converter, and evaluator modules. The main change is replacing direct usage of OpSchema with a new _schemas.OpSignature abstraction, leading to more consistent and modular code when dealing with operator signatures, especially for input casting and evaluation. Several related methods are renamed and refactored for clarity and encapsulation.

Operator signature abstraction and autocast refactor:
- Replaced OpSchema with _schemas.OpSignature in onnxscript/_internal/autocast.py, updating all relevant function signatures and internal logic to use the new abstraction. This includes changing how input parameters are filtered and type constraints are accessed.

AST Converter integration:
- Updated the converter (onnxscript/_internal/converter.py) to pass op_signature instead of op_schema to autocast functions, ensuring compatibility with the new signature abstraction.

Evaluator refactor and encapsulation:
- Refactored the evaluator (onnxscript/_internal/evaluator.py) to use _adapt_inputs, _adapt_attributes, and _adapt_outputs methods, encapsulating the logic for adapting inputs/outputs and removing unused or redundant methods. Operator signatures are now consistently adapted from OpSchema using _schemas.OpSignature.
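The evaluator refactor described above can be reduced to a toy flow. The private method names (_adapt_inputs, _adapt_attributes, _adapt_outputs) come from the PR; the bodies here are deliberately trivial stand-ins for onnxscript's real adaptation logic.

```python
class ToyEvaluator:
    """Toy evaluator mirroring the renamed private adapter methods.

    Each adaptation step is encapsulated in its own private method, so
    subclasses can override one stage without touching the others. The
    logic in each body is a placeholder, not onnxscript's.
    """

    def eval(self, op_signature, inputs, attributes):
        inputs = self._adapt_inputs(op_signature, inputs)
        attributes = self._adapt_attributes(op_signature, attributes)
        outputs = [sum(inputs)]  # placeholder "execution" of the op
        return self._adapt_outputs(op_signature, outputs)

    def _adapt_inputs(self, op_signature, inputs):
        # Placeholder: promote all numeric inputs to float.
        return [float(x) for x in inputs]

    def _adapt_attributes(self, op_signature, attributes):
        # In the PR, this path now always uses graph attributes; here we
        # just copy the mapping.
        return dict(attributes)

    def _adapt_outputs(self, op_signature, outputs):
        # Unwrap a single output for convenience.
        return outputs[0] if len(outputs) == 1 else outputs

print(ToyEvaluator().eval(None, [1, 2], {}))  # 3.0
```

Making the adapters private keeps the public surface to `eval`, which is what allows the PR to delete use_graph_attribute without changing any caller-facing API.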