fix(deps): update dependency llama-index to ^0.14.0#2736

Open
renovate[bot] wants to merge 1 commit into main from renovate/llama-index-0.x
Conversation


@renovate renovate bot commented Nov 25, 2025

This PR contains the following updates:

Package      | Change
llama-index  | ^0.13.0 → ^0.14.0
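For a 0.x package, the caret range ^0.14.0 pins the minor version: under Poetry/npm-style caret semantics it is equivalent to >=0.14.0,<0.15.0, so this PR admits releases like 0.14.12 below but excludes 0.15.0. A minimal sketch of that check (an illustrative helper, not part of llama-index or Renovate):

```python
def satisfies_caret(version: str, caret: str) -> bool:
    """Check a dotted version against a caret range like '^0.14.0'.

    The leftmost non-zero component of the base version is treated as the
    breaking boundary, so '^0.14.0' means >=0.14.0,<0.15.0 while '^1.2.3'
    means >=1.2.3,<2.0.0.
    """
    base = tuple(int(p) for p in caret.lstrip("^").split("."))
    v = tuple(int(p) for p in version.split("."))
    # Index of the leftmost non-zero component; bumping it is "breaking".
    idx = next((i for i, p in enumerate(base) if p != 0), len(base) - 1)
    upper = base[:idx] + (base[idx] + 1,)
    return base <= v and v[: idx + 1] < upper

# '^0.14.0' admits 0.14.12 but not 0.13.6 or 0.15.0
```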

Release Notes

run-llama/llama_index (llama-index)

v0.14.12

Compare Source

llama-index-callbacks-agentops [0.4.1]
llama-index-core [0.14.12]
  • Feat/async tool spec support (#​20338)
  • Improve MockFunctionCallingLLM (#​20356)
  • fix(openai): sanitize generic Pydantic model schema names (#​20371)
  • Element node parser (#​20399)
  • improve llama dev logging (#​20411)
  • test(node_parser): add unit tests for Java CodeSplitter (#​20423)
  • fix: crash in log_vector_store_query_result when result.ids is None (#​20427)
llama-index-embeddings-litellm [0.4.1]
  • Add docstring to LiteLLM embedding class (#​20336)
llama-index-embeddings-ollama [0.8.5]
  • feat(llama-index-embeddings-ollama): Add keep_alive parameter (#​20395)
  • docs: improve Ollama embeddings README with comprehensive documentation (#​20414)
llama-index-embeddings-voyageai [0.5.2]
llama-index-graph-stores-nebula [0.5.1]
  • feat(nebula): add MENTIONS edge to property graph store (#​20401)
llama-index-llms-aibadgr [0.1.0]
  • feat(llama-index-llms-aibadgr): Add AI Badgr OpenAI‑compatible LLM integration (#​20365)
llama-index-llms-anthropic [0.10.4]
llama-index-llms-bedrock-converse [0.12.3]
  • fix: bedrock converse thinking block issue (#​20355)
llama-index-llms-google-genai [0.8.3]
  • Switch use_file_api to Flexible file_mode; Improve File Upload Handling & Bump google-genai to v1.52.0 (#​20347)
  • Fix missing role from Google-GenAI (#​20357)
  • Add signature index fix (#​20362)
  • Add positional thought signature for thoughts (#​20418)
llama-index-llms-ollama [0.9.1]
  • feature: pydantic no longer complains if you pass 'low', 'medium', 'h… (#​20394)
llama-index-llms-openai [0.6.12]
  • fix: Handle tools=None in OpenAIResponses._get_model_kwargs (#​20358)
  • feat: add support for gpt-5.2 and 5.2 pro (#​20361)
llama-index-readers-confluence [0.6.1]
  • fix(confluence): support Python 3.14 (#​20370)
llama-index-readers-file [0.5.6]
  • Loosen constraint on pandas version (#​20387)
llama-index-readers-service-now [0.2.2]
  • chore(deps): bump urllib3 from 2.5.0 to 2.6.0 in /llama-index-integrations/readers/llama-index-readers-service-now in the pip group across 1 directory (#​20341)
llama-index-tools-mcp [0.4.5]
  • fix: pass timeout parameters to transport clients in BasicMCPClient (#​20340)
  • feature: Permit to pass a custom httpx.AsyncClient when creating a BasicMcpClient (#​20368)
llama-index-tools-typecast [0.1.0]
  • feat: add Typecast tool integration with text to speech features (#​20343)
llama-index-vector-stores-azurepostgresql [0.2.0]
llama-index-vector-stores-chroma [0.5.5]
  • Fix chroma nested metadata filters (#​20424)
  • fix(chroma): support multimodal results (#​20426)
llama-index-vector-stores-couchbase [0.6.0]
  • Update FTS & GSI reference docs for Couchbase vector-store (#​20346)
llama-index-vector-stores-faiss [0.5.2]
  • fix(faiss): pass numpy array instead of int to add_with_ids (#​20384)
llama-index-vector-stores-lancedb [0.4.4]
  • Feat/async tool spec support (#​20338)
  • fix(vector_stores/lancedb): add missing '<' filter operator (#​20364)
  • fix(lancedb): fix metadata filtering logic and list value SQL generation (#​20374)
llama-index-vector-stores-mongodb [0.9.0]
  • Update mongo vector store to initialize without list permissions (#​20354)
  • add mongodb delete index (#​20429)
  • async mongodb atlas support (#​20430)
llama-index-vector-stores-redis [0.6.2]
llama-index-vector-stores-vertexaivectorsearch [0.3.3]
  • feat(vertex-vector-search): Add Google Vertex AI Vector Search v2.0 support (#​20351)

v0.14.10

Compare Source

llama-index-core [0.14.10]
  • feat: add mock function calling llm (#​20331)
llama-index-llms-qianfan [0.4.1]
  • test: fix typo 'reponse' to 'response' in variable names (#​20329)
llama-index-tools-airweave [0.1.0]
  • feat: add Airweave tool integration with advanced search features (#​20111)
llama-index-utils-qianfan [0.4.1]
  • test: fix typo 'reponse' to 'response' in variable names (#​20329)

v0.14.9

Compare Source

llama-index-agent-azure [0.2.1]
  • fix: Pin azure-ai-projects version to prevent breaking changes (#​20255)
llama-index-core [0.14.9]
  • MultiModalVectorStoreIndex now returns a multi-modal ContextChatEngine. (#​20265)
  • Ingestion to vector store now ensures that _node-content is readable (#​20266)
  • fix: ensure context is copied with async utils run_async (#​20286)
  • fix(memory): ensure first message in queue is always a user message after flush (#​20310)
llama-index-embeddings-bedrock [0.7.2]
  • feat(embeddings-bedrock): Add support for Amazon Bedrock Application Inference Profiles (#​20267)
  • fix:(embeddings-bedrock) correct extraction of provider from model_name (#​20295)
  • Bump version of bedrock-embedding (#​20304)
llama-index-embeddings-voyageai [0.5.1]
  • VoyageAI correction and documentation (#​20251)
llama-index-llms-anthropic [0.10.3]
llama-index-llms-bedrock-converse [0.12.2]
  • fix(bedrock-converse): Only use guardrail_stream_processing_mode in streaming functions (#​20289)
  • feat: add anthropic opus 4.5 (#​20306)
  • feat(bedrock-converse): Additional support for Claude Opus 4.5 (#​20317)
llama-index-llms-google-genai [0.7.4]
  • Fix gemini-3 support and gemini function call support (#​20315)
llama-index-llms-helicone [0.1.1]
  • update helicone docs + examples (#​20208)
llama-index-llms-openai [0.6.10]
llama-index-llms-ovhcloud [0.1.0]
  • Add OVHcloud AI Endpoints provider (#​20288)
llama-index-llms-siliconflow [0.4.2]
  • [Bugfix] None check on content in delta in siliconflow LLM (#​20327)
llama-index-node-parser-docling [0.4.2]
  • Relax docling Python constraints (#​20322)
llama-index-packs-resume-screener [0.9.3]
  • feat: Update pypdf to latest version (#​20285)
llama-index-postprocessor-voyageai-rerank [0.4.1]
  • VoyageAI correction and documentation (#​20251)
llama-index-protocols-ag-ui [0.2.3]
  • fix: correct order of ag-ui events to avoid event conflicts (#​20296)
llama-index-readers-confluence [0.6.0]
  • Refactor Confluence integration: Update license to MIT, remove requirements.txt, and implement HtmlTextParser for HTML to Markdown conversion. Update dependencies and tests accordingly. (#​20262)
llama-index-readers-docling [0.4.2]
  • Relax docling Python constraints (#​20322)
llama-index-readers-file [0.5.5]
  • feat: Update pypdf to latest version (#​20285)
llama-index-readers-reddit [0.4.1]
  • Fix typo in README.md for Reddit integration (#​20283)
llama-index-storage-chat-store-postgres [0.3.2]
  • [FIX] Postgres ChatStore automatically prefix table name with "data_" (#​20241)
llama-index-vector-stores-azureaisearch [0.4.4]
  • vector-azureaisearch: check if user agent already in policy before add it to azure client (#​20243)
  • fix(azureaisearch): Add close/aclose methods to fix unclosed client session warnings (#​20309)
llama-index-vector-stores-milvus [0.9.4]
  • Fix/consistency level param for milvus (#​20268)
llama-index-vector-stores-postgres [0.7.2]
llama-index-vector-stores-qdrant [0.9.0]
  • fix: Update qdrant-client version constraints (#​20280)
  • Feat: update Qdrant client to 1.16.0 (#​20287)
llama-index-vector-stores-vertexaivectorsearch [0.3.2]
  • fix: update blob path in batch_update_index (#​20281)
llama-index-voice-agents-openai [0.2.2]

v0.14.8

Compare Source

llama-index-core [0.14.8]
  • Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" (#​20098)
  • Add buffer to image, audio, video and document blocks (#​20153)
  • fix(agent): Handle multi-block ChatMessage in ReActAgent (#​20196)
  • Fix/20209 (#​20214)
  • Preserve Exception in ToolOutput (#​20231)
  • fix weird pydantic warning (#​20235)
llama-index-embeddings-nvidia [0.4.2]
  • docs: Edit pass and update example model (#​20198)
llama-index-embeddings-ollama [0.8.4]
  • Added a test case (no code) to check the embedding through an actual connection to a Ollama server (after checking that the ollama server exists) (#​20230)
llama-index-llms-anthropic [0.10.2]
  • feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming (#​20206)
  • chore: remove unsupported models (#​20211)
llama-index-llms-bedrock-converse [0.11.1]
  • feat: integrate bedrock converse with tool call block (#​20099)
  • feat: Update model name extraction to include 'jp' region prefix and … (#​20233)
llama-index-llms-google-genai [0.7.3]
  • feat: google genai integration with tool block (#​20096)
  • fix: non-streaming gemini tool calling (#​20207)
  • Add token usage information in GoogleGenAI chat additional_kwargs (#​20219)
  • bug fix google genai stream_complete (#​20220)
llama-index-llms-nvidia [0.4.4]
  • docs: Edit pass and code example updates (#​20200)
llama-index-llms-openai [0.6.8]
  • FixV2: Correct DocumentBlock type for OpenAI from 'input_file' to 'file' (#​20203)
  • OpenAI v2 sdk support (#​20234)
llama-index-llms-upstage [0.6.5]
llama-index-packs-streamlit-chatbot [0.5.2]
llama-index-packs-voyage-query-engine [0.5.2]
llama-index-postprocessor-nvidia-rerank [0.5.1]
llama-index-readers-web [0.5.6]
  • feat: Add ScrapyWebReader Integration (#​20212)
  • Update Scrapy dependency to 2.13.3 (#​20228)
llama-index-readers-whisper [0.3.0]
llama-index-storage-kvstore-postgres [0.4.3]
  • fix: Ensure schema creation only occurs if it doesn't already exist (#​20225)
llama-index-tools-brightdata [0.2.1]
  • docs: add api key claim instructions (#​20204)
llama-index-tools-mcp [0.4.3]
  • Added test case for issue 19211. No code change (#​20201)
llama-index-utils-oracleai [0.3.1]
  • Update llama-index-core dependency to 0.12.45 (#​20227)
llama-index-vector-stores-lancedb [0.4.2]
  • fix: FTS index recreation bug on every LanceDB query (#​20213)

v0.14.7

Compare Source

llama-index-core [0.14.7]
  • Feat/serpex tool integration (#​20141)
  • Fix outdated error message about setting LLM (#​20157)
  • Fixing some recently failing tests (#​20165)
  • Fix: update lock to latest workflow and fix issues (#​20173)
  • fix: ensure full docstring is used in FunctionTool (#​20175)
  • fix api docs build (#​20180)
llama-index-embeddings-voyageai [0.5.0]
  • Updating the VoyageAI integration (#​20073)
llama-index-llms-anthropic [0.10.0]
  • feat: integrate anthropic with tool call block (#​20100)
llama-index-llms-bedrock-converse [0.10.7]
  • feat: Add support for Bedrock Guardrails streamProcessingMode (#​20150)
  • bedrock structured output optional force (#​20158)
llama-index-llms-fireworks [0.4.5]
llama-index-llms-mistralai [0.9.0]
  • feat: mistralai integration with tool call block (#​20103)
llama-index-llms-ollama [0.9.0]
  • feat: integrate ollama with tool call block (#​20097)
llama-index-llms-openai [0.6.6]
  • Allow setting temp of gpt-5-chat (#​20156)
llama-index-readers-confluence [0.5.0]
  • feat(confluence): make SVG processing optional to fix pycairo install… (#​20115)
llama-index-readers-github [0.9.0]
  • Add GitHub App authentication support (#​20106)
llama-index-retrievers-bedrock [0.5.1]
  • Fixing some recently failing tests (#​20165)
llama-index-tools-serpex [0.1.0]
llama-index-vector-stores-couchbase [0.6.0]
  • Add Hyperscale and Composite Vector Indexes support for Couchbase vector-store (#​20170)

v0.14.6

Compare Source

llama-index-core [0.14.6]
  • Add allow_parallel_tool_calls for non-streaming (#​20117)
  • Fix invalid use of field-specific metadata (#​20122)
  • update doc for SemanticSplitterNodeParser (#​20125)
  • fix rare cases when sentence splits are larger than chunk size (#​20147)
llama-index-embeddings-bedrock [0.7.0]
  • Fix BedrockEmbedding to support Cohere v4 response format (#​20094)
llama-index-embeddings-isaacus [0.1.0]
  • feat: Isaacus embeddings integration (#​20124)
llama-index-embeddings-oci-genai [0.4.2]
llama-index-llms-anthropic [0.9.7]
  • Fix double token stream in anthropic llm (#​20108)
  • Ensure anthropic content delta only has user facing response (#​20113)
llama-index-llms-baseten [0.1.7]
llama-index-llms-helicone [0.1.0]
  • integrate helicone to llama-index (#​20131)
llama-index-llms-oci-genai [0.6.4]
llama-index-llms-openai [0.6.5]
llama-index-readers-imdb-review [0.4.2]
  • chore: Update selenium dependency in imdb-review reader (#​20105)
llama-index-retrievers-bedrock [0.5.0]
  • feat(bedrock): add async support for AmazonKnowledgeBasesRetriever (#​20114)
llama-index-retrievers-superlinked [0.1.3]
llama-index-storage-kvstore-postgres [0.4.2]
  • fix: Replace raw SQL string interpolation with proper SQLAlchemy parameterized APIs in PostgresKVStore (#​20104)
llama-index-tools-mcp [0.4.3]
  • Fix BasicMCPClient resource signatures (#​20118)
llama-index-vector-stores-postgres [0.7.1]
  • Add GIN index support for text array metadata in PostgreSQL vector store (#​20130)

v0.14.5

Compare Source

llama-index-core [0.14.5]
  • Remove debug print (#​20000)
  • safely initialize RefDocInfo in Docstore (#​20031)
  • Add progress bar for multiprocess loading (#​20048)
  • Fix duplicate node positions when identical text appears multiple times in document (#​20050)
  • chore: tool call block - part 1 (#​20074)
llama-index-instrumentation [0.4.2]
  • update instrumentation package metadata (#​20079)
llama-index-llms-anthropic [0.9.5]
  • ✨ feat(anthropic): add prompt caching model validation utilities (#​20069)
  • fix streaming thinking/tool calling with anthropic (#​20077)
  • Add haiku 4.5 support (#​20092)
llama-index-llms-baseten [0.1.6]
  • Baseten provider Kimi K2 0711, Llama 4 Maverick and Llama 4 Scout Model APIs deprecation (#​20042)
llama-index-llms-bedrock-converse [0.10.5]
  • feat: List Claude Sonnet 4.5 as a reasoning model (#​20022)
  • feat: Support global cross-region inference profile prefix (#​20064)
  • Update utils.py for opus 4.1 (#​20076)
  • 4.1 opus bedrockconverse missing in function calling models (#​20084)
  • Add haiku 4.5 support (#​20092)
llama-index-llms-fireworks [0.4.4]
  • Add Support for Custom Models in Fireworks LLM (#​20023)
  • fix(llms/fireworks): Cannot use Fireworks Deepseek V3.1-20006 issue (#​20028)
llama-index-llms-oci-genai [0.6.3]
  • Add support for xAI models in OCI GenAI (#​20089)
llama-index-llms-openai [0.6.4]
  • Gpt 5 pro addition (#​20029)
  • fix collecting final response with openai responses streaming (#​20037)
  • Add support for GPT-5 models in utils.py (JSON_SCHEMA_MODELS) (#​20045)
  • chore: tool call block - part 1 (#​20074)
llama-index-llms-sglang [0.1.0]
llama-index-readers-gitlab [0.5.1]
  • feat(gitlab): add pagination params for repository tree and issues (#​20052)
llama-index-readers-json [0.4.2]
llama-index-readers-web [0.5.5]
  • fix: ScrapflyReader Pydantic validation error (#​19999)
llama-index-storage-chat-store-dynamodb [0.4.2]
llama-index-tools-mcp [0.4.2]
  • 🐛 fix(tools/mcp): Fix dict type handling and reference resolution in … (#​20082)
llama-index-tools-signnow [0.1.0]
  • feat(signnow): SignNow mcp tools integration (#​20057)
llama-index-tools-tavily-research [0.4.2]
  • feat: Add Tavily extract function for URL content extraction (#​20038)
llama-index-vector-stores-azurepostgresql [0.2.0]
  • Add hybrid search to Azure PostgreSQL integration (#​20027)
llama-index-vector-stores-milvus [0.9.3]
llama-index-vector-stores-opensearch [0.6.2]
  • fix(opensearch): Correct version check for efficient filtering (#​20067)
llama-index-vector-stores-qdrant [0.8.6]
  • fix(qdrant): Allow async-only initialization with hybrid search (#​20005)

v0.14.4

Compare Source

llama-index-core [0.14.4]
llama-index-embeddings-anyscale [0.4.2]
llama-index-embeddings-baseten [0.1.2]
llama-index-embeddings-fireworks [0.4.2]
llama-index-embeddings-opea [0.2.2]
llama-index-embeddings-text-embeddings-inference [0.4.2]
  • Fix authorization header setup logic in text embeddings inference (#​19979)
llama-index-llms-anthropic [0.9.3]
llama-index-llms-anyscale [0.4.2]
llama-index-llms-azure-openai [0.4.2]
llama-index-llms-baseten [0.1.5]
llama-index-llms-bedrock-converse [0.9.5]
  • feat: Additional support for Claude Sonnet 4.5 (#​19980)
llama-index-llms-deepinfra [0.5.2]
llama-index-llms-everlyai [0.4.2]
llama-index-llms-fireworks [0.4.2]
llama-index-llms-google-genai [0.6.2]
  • Fix for ValueError: ChatMessage contains multiple blocks, use 'ChatMe… (#​19954)
llama-index-llms-keywordsai [1.1.2]
llama-index-llms-localai [0.5.2]
llama-index-llms-mistralai [0.8.2]
llama-index-llms-monsterapi [0.4.2]
llama-index-llms-nvidia [0.4.4]
llama-index-llms-ollama [0.7.4]
  • Fix TypeError: unhashable type: 'dict' in Ollama stream chat with tools (#​19938)
llama-index-llms-openai [0.6.1]
  • feat(OpenAILike): support structured outputs (#​19967)
llama-index-llms-openai-like [0.5.3]
  • feat(OpenAILike): support structured outputs (#​19967)
llama-index-llms-openrouter [0.4.2]
  • chore(openrouter,anthropic): add py.typed (#​19966)
llama-index-llms-perplexity [0.4.2]
llama-index-llms-portkey [0.4.2]
llama-index-llms-sarvam [0.2.1]
llama-index-llms-upstage [0.6.4]
llama-index-llms-yi [0.4.2]
llama-index-memory-bedrock-agentcore [0.1.0]
  • feat: Bedrock AgentCore Memory integration (#​19953)
llama-index-multi-modal-llms-openai [0.6.2]
llama-index-readers-confluence [0.4.4]
  • Fix: Respect cloud parameter when fetching child pages in ConfluenceR… (#​19983)
llama-index-readers-service-now [0.2.2]
  • Bug Fix :- Not Able to Fetch Page whose latest is empty or null (#​19916)
llama-index-selectors-notdiamond [0.4.0]
llama-index-tools-agentql [1.2.0]
llama-index-tools-playwright [0.3.1]
llama-index-tools-scrapegraph [0.2.2]
llama-index-vector-stores-chroma [0.5.3]
llama-index-vector-stores-mongodb [0.8.1]
llama-index-vector-stores-postgres [0.7.0]
  • fix index creation in postgres vector store (#​19955)
llama-index-vector-stores-solr [0.1.0]
  • Add ApacheSolrVectorStore Integration (#​19933)

v0.14.3

Compare Source

llama-index-core [0.14.3]
  • Fix Gemini thought signature serialization (#​19891)
  • Adding a ThinkingBlock among content blocks (#​19919)
llama-index-llms-anthropic [0.9.0]
  • Adding a ThinkingBlock among content blocks (#​19919)
llama-index-llms-baseten [0.1.4]
  • added kimik2 0905 and reordered list for validation (#​19892)
  • Baseten Dynamic Model APIs Validation (#​19893)
llama-index-llms-google-genai [0.6.0]
  • Add missing FileAPI support for documents (#​19897)
  • Adding a ThinkingBlock among content blocks (#​19919)
llama-index-llms-mistralai [0.8.0]
  • Adding a ThinkingBlock among content blocks (#​19919)
llama-index-llms-openai [0.6.0]
  • Adding a ThinkingBlock among content blocks (#​19919)
llama-index-protocols-ag-ui [0.2.2]
  • improve how state snapshotting works in AG-UI (#​19934)
llama-index-readers-mongodb [0.5.0]
  • Use PyMongo Asynchronous API instead of Motor (#​19875)
llama-index-readers-paddle-ocr [0.1.0]
  • [New Package] Add PaddleOCR Reader for extracting text from images in PDFs (#​19827)
llama-index-readers-web [0.5.4]
  • feat(readers/web-firecrawl): migrate to Firecrawl v2 SDK (#​19773)
llama-index-storage-chat-store-mongo [0.3.0]
  • Use PyMongo Asynchronous API instead of Motor (#​19875)
llama-index-storage-kvstore-mongodb [0.5.0]
  • Use PyMongo Asynchronous API instead of Motor (#​19875)
llama-index-tools-valyu [0.5.0]
  • Add Valyu Extractor and Fast mode (#​19915)
llama-index-vector-stores-azureaisearch [0.4.2]
  • Fix/llama index vector stores azureaisearch fix (#​19800)
llama-index-vector-stores-azurepostgresql [0.1.0]
  • Add support for Azure PostgreSQL (#​19709)
llama-index-vector-stores-qdrant [0.8.5]
  • Add proper compat for old sparse vectors (#​19882)
llama-index-vector-stores-singlestoredb [0.4.2]
  • Fix SQLi Vulnerability in SingleStore Db (#​19914)

v0.14.2

Compare Source

llama-index-core [0.14.2]
  • fix: handle data urls in ImageBlock (#​19856)
  • fix: Move IngestionPipeline docstore document insertion after transformations (#​19849)
  • fix: Update IngestionPipeline async document store insertion (#​19868)
  • chore: remove stepwise usage of workflows from code (#​19877)
llama-index-embeddings-fastembed [0.5.0]
  • feat: make fastembed cpu or gpu optional (#​19878)
llama-index-llms-deepseek [0.2.2]
  • feat: pass context_window to super in deepseek llm (#​19876)
llama-index-llms-google-genai [0.5.0]
  • feat: Add GoogleGenAI FileAPI support for large files (#​19853)
llama-index-readers-solr [0.1.0]
  • feat: Add Solr reader integration (#​19843)
llama-index-retrievers-alletra-x10000-retriever [0.1.0]
  • feat: add AlletraX10000Retriever integration (#​19798)
llama-index-vector-stores-oracledb [0.3.2]
  • feat: OraLlamaVS Connection Pool Support + Filtering (#​19412)
llama-index-vector-stores-postgres [0.6.8]
  • feat: Add customize_query_fn to PGVectorStore (#​19847)

v0.14.1

Compare Source


v0.14.0

Compare Source

NOTE: All packages have been bumped to handle the latest llama-index-core version.

llama-index-core [0.14.0]
  • breaking: bumped llama-index-workflows dependency to 2.0
    • Improve stacktraces clarity by avoiding wrapping errors in WorkflowRuntimeError
    • Remove deprecated checkpointer feature
    • Remove deprecated sub-workflows feature
    • Remove deprecated send_event method from Workflow class (still existing on the Context class)
    • Remove deprecated stream_events() methods from Workflow class (still existing on the Context class)
    • Remove deprecated support for stepwise execution
llama-index-llms-openai [0.5.6]
  • feat: add support for document blocks in openai chat completions (#​19809)

v0.13.6

Compare Source

llama-index-core [0.13.6]
  • chore: remove openai selector from core utils function (#​19803)
llama-index-llms-cometapi [0.1.0]
  • feat: Add CometAPI LLM integration (#​19793)

v0.13.5

Compare Source

llama-index-core [0.13.5]
  • feat: add thinking delta field to AgentStream events to expose from LLM responses (#​19785)
  • fix: fix path handling in SimpleDirectoryReader and PDFReader path fix (#​19794)
llama-index-llms-bedrock-converse [0.9.0]
  • feat: add system prompt and tool caching config kwargs to BedrockConverse (#​19737)
llama-index-llms-litellm [0.6.2]
  • fix: Handle missing tool call IDs with UUID fallback (#​19789)
  • fix: Fix critical context window calculation (#​19787)
llama-index-readers-file [0.5.3]
  • fix: fix path handling in SimpleDirectoryReader and PDFReader path fix (#​19794)
llama-index-storage-chat-store-yugabytedb [0.1.0]
  • feat: add Yugabytedb chat store (#​19768)
llama-index-vector-stores-milvus [0.9.1]
  • fix: create TextNode if no '_node_content' set (#​19772)
llama-index-vector-stores-postgres [0.6.5]
  • fix: make postgres regex punctuation handling consistent with plainto_tsquery (#​19781)

v0.13.4

Compare Source

llama-index-core [0.13.4]
  • feat: Add PostgreSQL schema support to Memory and SQLAlchemyChatStore (#​19741)
  • feat: add missing sync wrapper of put_messages in memory (#​19746)
  • feat: add option for an initial tool choice in FunctionAgent (#​19738)
  • fix: Calling ContextChatEngine with a QueryBundle (instead of a string) (#​19714)
llama-index-embeddings-baseten [0.1.0]
llama-index-embeddings-ibm [0.5.0]
  • feat: Support for additional/external urls, make instance_id deprecated (#​19749)
llama-index-llms-baseten [0.1.0]
llama-index-llms-bedrock-converse [0.8.3]
  • feat: add amazon.nova-premier-v1:0 to BEDROCK_MODELS (#​19728)
llama-index-llms-ibm [0.6.0]
  • feat: Support for additional/external urls, make instance_id deprecated (#​19749)
llama-index-postprocessor-ibm [0.3.0]
  • feat: Support for additional/external urls, make instance_id deprecated (#​19749)
llama-index-postprocessor-sbert-rerank [0.4.1]
  • fix: fix SentenceTransformerRerank init device (#​19756)
llama-index-readers-google [0.7.1]
  • feat: raise google drive errors (#​19752)
llama-index-readers-web [0.5.1]
llama-index-vector-stores-chroma [0.5.2]
llama-index-vector-stores-postgres [0.6.4]
  • fix: Use the indexed metadata field 'ref_doc_id' instead of 'doc_id' during deletion (#​19759)
llama-index-vector-stores-qdrant [0.8.2]
  • feat: Payload indexes support to QdrantVectorStore (#​19743)

v0.13.3

Compare Source

llama-index-core [0.13.3]
  • fix: add timeouts on image .get() requests (#​19723)
  • fix: fix StreamingAgentChatResponse losses message bug (#​19674)
  • fix: Fixing crashing when retrieving from empty vector store index (#​19706)
  • fix: Calling ContextChatEngine with a QueryBundle (instead of a string) (#​19714)
  • fix: Fix faithfulness evaluate crash when no images provided (#​19686)
llama-index-embeddings-heroku [0.1.0]
  • feat: Adds support for HerokuEmbeddings (#​19685)
llama-index-embeddings-ollama [0.8.2]
  • feat: enhance OllamaEmbedding with instruction support (#​19721)
llama-index-llms-anthropic [0.8.5]
  • fix: Fix prompt caching with CachePoint (#​19711)
llama-index-llms-openai [0.5.4]
  • feat: add gpt-5-chat-latest model support (#​19687)
llama-index-llms-sagemaker-endpoint [0.4.1]
  • fix: fix constructor region read to not read region_name before is popped from kwargs, and fix assign to super (#​19705)
llama-index-llms-upstage [0.6.2]
  • chore: remove deprecated model(solar-pro) (#​19704)
llama-index-readers-confluence [0.4.1]
  • fix: Support concurrent use of multiple ConfluenceReader instances (#​19698)
llama-index-vector-stores-chroma [0.5.1]
  • fix: fix get_nodes() with empty node ids (#​19711)
llama-index-vector-stores-qdrant [0.8.1]
llama-index-vector-stores-tencentvectordb [0.4.1]
  • fix: Resolve AttributeError in CollectionParams.filter_fields access (#​19695)

Configuration

📅 Schedule: Branch creation - "every weekend" in timezone US/Eastern, Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.
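The schedule and automerge settings above correspond roughly to a renovate.json fragment along these lines (illustrative only; field names per Renovate's configuration options, exact repository settings assumed):

```json
{
  "timezone": "US/Eastern",
  "schedule": ["every weekend"],
  "automerge": true
}
```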

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot force-pushed the renovate/llama-index-0.x branch 4 times, most recently from d016986 to dd7a897 Compare December 1, 2025 16:12
@renovate renovate bot force-pushed the renovate/llama-index-0.x branch 13 times, most recently from 7269527 to f76622b Compare December 12, 2025 21:19
@renovate renovate bot force-pushed the renovate/llama-index-0.x branch 12 times, most recently from 3f01368 to f67eb95 Compare December 18, 2025 03:56
@renovate renovate bot force-pushed the renovate/llama-index-0.x branch 7 times, most recently from 4a12cf2 to d216ebf Compare February 5, 2026 19:47
@renovate renovate bot force-pushed the renovate/llama-index-0.x branch 20 times, most recently from e3425c1 to 42f4dba Compare February 11, 2026 19:47
@renovate renovate bot force-pushed the renovate/llama-index-0.x branch from 42f4dba to 6e6a427 Compare February 15, 2026 11:41
