
[tune](deps): Bump onnxruntime from 1.8.0 to 1.9.0 in /python/requirements/ml#14

Closed
dependabot[bot] wants to merge 1 commit into master from
dependabot/pip/python/requirements/ml/onnxruntime-1.9.0

Conversation


@dependabot dependabot bot commented on behalf of github Sep 25, 2021

Bumps onnxruntime from 1.8.0 to 1.9.0.

Release notes

Sourced from onnxruntime's releases.

ONNX Runtime v1.9.0

Announcements

  • GCC version < 7 is no longer supported
  • CMAKE_SYSTEM_PROCESSOR needs to be set when cross-compiling on Linux, because pytorch cpuinfo was introduced as a dependency for ARM big.LITTLE support. Set it to the value of the uname -m output of your target device.

General

  • ONNX 1.10 support
    • opset 15
    • ONNX IR 8 (SparseTensor type, model-local FunctionProtos; Optional type not yet fully supported in this release)
  • Improved documentation of C/C++ APIs
  • IBM Power support
  • WinML - DLL dependency fix supports learning models on Windows 8.1
  • Support for sub-building onnxruntime-extensions and statically linking into onnxruntime binary for custom builds
    • Added the --use_extensions option to run models with custom operators implemented in onnxruntime-extensions

APIs

  • Registration of a custom allocator for sharing between multiple sessions. (See RegisterAllocator and UnregisterAllocator APIs in onnxruntime_c_api.h)
  • SessionOptionsAppendExecutionProvider_TensorRT API is deprecated; use SessionOptionsAppendExecutionProvider_TensorRT_V2
  • New APIs: SessionOptionsAppendExecutionProvider_TensorRT_V2, CreateTensorRTProviderOptions, UpdateTensorRTProviderOptions, GetTensorRTProviderOptionsAsString, ReleaseTensorRTProviderOptions, EnableOrtCustomOps, RegisterAllocator, UnregisterAllocator, IsSparseTensor, CreateSparseTensorAsOrtValue, FillSparseTensorCoo, FillSparseTensorCsr, FillSparseTensorBlockSparse, CreateSparseTensorWithValuesAsOrtValue, UseCooIndices, UseCsrIndices, UseBlockSparseIndices, GetSparseTensorFormat, GetSparseTensorValuesTypeAndShape, GetSparseTensorValues, GetSparseTensorIndicesTypeShape, GetSparseTensorIndices.

Performance and quantization

  • Performance improvement on ARM
    • Added an S8S8 (signed int8, signed int8) matmul kernel. This avoids extending uint8 to int16, for better performance on ARM64 CPUs without dot-product instructions
    • Expanded GEMM udot kernel to 8x8 accumulator
    • Added sgemm and qgemm optimized kernels for ARM64EC
  • Operator improvements
    • Improved performance for quantized operators: DynamicQuantizeLSTM, QLinearAvgPool
    • Added new quantized operator QGemm for quantizing Gemm directly
    • Fused HardSigmoid and Conv
  • Quantization tool - subgraph support
  • Transformers tool improvements
    • Fused Attention for BART encoder and Megatron GPT-2
    • Integrated mixed precision ONNX conversion and parity test for GPT-2
    • Updated graph fusion for embed layer normalization for BERT
    • Improved symbolic shape inference for operators: Attention, EmbedLayerNormalization, Einsum and Reciprocal

Packages

  • Official ORT GPU packages (except Python) now include both CUDA and TensorRT Execution Providers.
    • Python packages will be updated next release. Please note that EPs should be explicitly registered to ensure the correct provider is used.
  • GPU packages are built with CUDA 11.4 and should be compatible with 11.x on systems with the minimum required driver version. See: CUDA minor version compatibility
  • Pypi
    • ORT + DirectML Python packages now available: onnxruntime-directml
    • GPU package can be used on both CPU-only and GPU machines
  • Nuget
    • C#: Added support for using netstandard2.0 as a target framework
    • Windows symbol (PDB) files are no longer included in the Nuget package, reducing the size of the binary Nuget package by 85%. To download them, see the artifacts below on GitHub.
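The note above that execution providers should now be registered explicitly can be sketched in a few lines of Python. The provider names below are the identifiers ONNX Runtime itself uses; `pick_providers` is a hypothetical helper, and in a real program the `available` list would come from `onnxruntime.get_available_providers()`:

```python
# Sketch of explicit execution-provider selection, as the 1.9.0 notes
# recommend for the combined CUDA + TensorRT GPU package.
PREFERRED = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]

def pick_providers(available):
    """Keep the preferred providers that are actually installed, with the
    CPU provider always present as a last-resort fallback."""
    chosen = [p for p in PREFERRED if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# On a CPU-only machine running the GPU package:
print(pick_providers(["CPUExecutionProvider"]))
# ['CPUExecutionProvider']
```

With onnxruntime installed, the resulting list would be passed explicitly, e.g. `ort.InferenceSession("model.onnx", providers=pick_providers(ort.get_available_providers()))`, instead of relying on the default provider order.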

Execution Providers

  • CUDA EP

... (truncated)

Commits
  • 4daa14b Fixes to rel-1.9.0 to compile and pass for AMD ROCm (#9144)
  • 66b3c31 Final round cherry-picks to 1.9.0 (#9133)
  • b73bc79 Add a pipeline for audio ops (#9102)
  • 83dc225 Second round cherry-pick to rel-1.9.0 (#9062)
  • f202cf3 First round cherry-pick to rel-1.9.0 (#9019)
  • 6fbd0a8 Change cmake_cuda_architectures to double quotes (#8990)
  • 5ae4c54 Fix bug for validating GPU packages (#8997)
  • a30d9f5 fix windows gpu pipelines that use cuda 10.2 (training, reduced_ops and 10.2 ...
  • 4505243 [js/web] WebAssembly profiling (#8932)
  • 0193490 ReduceMin - add int64 cuda kernel support for opset12/13 (#8966)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [onnxruntime](https://github.com/microsoft/onnxruntime) from 1.8.0 to 1.9.0.
- [Release notes](https://github.com/microsoft/onnxruntime/releases)
- [Changelog](https://github.com/microsoft/onnxruntime/blob/master/docs/ReleaseManagement.md)
- [Commits](microsoft/onnxruntime@v1.8.0...v1.9.0)

---
updated-dependencies:
- dependency-name: onnxruntime
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies Pull requests that update a dependency file label Sep 25, 2021

dependabot bot commented on behalf of github Dec 11, 2021

Superseded by #25.

@dependabot dependabot bot closed this Dec 11, 2021
@dependabot dependabot bot deleted the dependabot/pip/python/requirements/ml/onnxruntime-1.9.0 branch December 11, 2021 08:21
xychu pushed a commit that referenced this pull request Oct 8, 2023
Upgrade typepy; typepy 1.3.1 is broken

https://buildkite.com/ray-project/release-tests-branch/builds/2225#018afbb2-428e-4364-b26c-7c49052edd26

#14 80.99 ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==. These do not:
#14 80.99     typepy<2,>=1.2.0 from https://files.pythonhosted.org/packages/f1/10/0d6dc654bb4e0eca017bbaf43a315b464c888576a68a2883cd4a74bd1b6b/typepy-1.3.2-py3-none-any.whl (from tabledata==1.3.1->-r requirements_ml_byod_3.9.txt (line 2259))
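The error above comes from pip's hash-checking mode: once any requirement in the file carries a --hash, every requirement, including transitive ones like typepy, must be pinned with == and given its own hash. A hypothetical fragment of what the fixed pin looks like (the digest placeholder is illustrative, not the real hash):

```
typepy==1.3.2 \
    --hash=sha256:<digest-from-pypi>
```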

Test:
- CI
- Install requirements_ml_byod_3.9.txt locally

Signed-off-by: can <can@anyscale.com>