From 0b00ed8c688cf15c725d0d297424c05a5f99ddea Mon Sep 17 00:00:00 2001
From: Justin Lin
Date: Wed, 4 Feb 2026 15:12:52 -0500
Subject: [PATCH] fix: update model cache example repo URL

Update GitHub links from runpod/model-store-cache-example to
runpod-workers/model-store-cache-example.
---
 serverless/endpoints/model-caching.mdx      | 2 +-
 tutorials/serverless/model-caching-text.mdx | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/serverless/endpoints/model-caching.mdx b/serverless/endpoints/model-caching.mdx
index 6ca8ef0c..93eb94d7 100644
--- a/serverless/endpoints/model-caching.mdx
+++ b/serverless/endpoints/model-caching.mdx
@@ -179,7 +179,7 @@ else:
 
 The following sample applications demonstrate how you can integrate cached models into your custom workers:
 
-- [Cached models + LLMs](https://github.com/runpod/model-store-cache-example): A custom worker that uses cached models to serve LLMs.
+- [Cached models + LLMs](https://github.com/runpod-workers/model-store-cache-example): A custom worker that uses cached models to serve LLMs.
 
 ## Current limitations
 
diff --git a/tutorials/serverless/model-caching-text.mdx b/tutorials/serverless/model-caching-text.mdx
index caccbbf1..33f3fc8c 100644
--- a/tutorials/serverless/model-caching-text.mdx
+++ b/tutorials/serverless/model-caching-text.mdx
@@ -6,7 +6,7 @@
 tag: "NEW"
 ---
 
-You can download the finished code for this tutorial [on GitHub](https://github.com/runpod/model-store-cache-example).
+You can download the finished code for this tutorial [on GitHub](https://github.com/runpod-workers/model-store-cache-example).
 
 This tutorial demonstrates how to build a custom Serverless worker that leverages Runpod's [cached model](/serverless/endpoints/model-caching) feature to serve the Phi-3 language model. You'll learn how to create a handler function that locates and loads cached models in offline mode, which can significantly reduce costs and cold start times.