From 321839530d9daa30cf74af0f03129e391be0a44d Mon Sep 17 00:00:00 2001 From: Mike Hartington Date: Mon, 23 Feb 2026 17:09:52 -0500 Subject: [PATCH 1/3] chore(docs): move to direct tcp for most things --- apps/docs/content/docs/console/index.mdx | 6 +- .../more/comparisons/prisma-and-drizzle.mdx | 8 -- .../client-extensions/extension-examples.mdx | 1 - .../deployment/edge/deploy-to-cloudflare.mdx | 49 +---------- .../postgres/database/connection-pooling.mdx | 67 --------------- .../postgres/database/direct-connections.mdx | 86 ++----------------- 6 files changed, 12 insertions(+), 205 deletions(-) diff --git a/apps/docs/content/docs/console/index.mdx b/apps/docs/content/docs/console/index.mdx index 27975c646c..cca050ba48 100644 --- a/apps/docs/content/docs/console/index.mdx +++ b/apps/docs/content/docs/console/index.mdx @@ -10,7 +10,7 @@ metaDescription: Learn about the Console to integrate the Prisma Data Platform p The [Console](https://console.prisma.io/login) enables you to manage and configure your projects that use Prisma products, and helps you integrate them into your application: -- [Accelerate](/accelerate): Speeds up your queries with a global database cache with scalable connection pooling. + - [Optimize](/optimize): Provides you recommendations that can help you make your database queries faster. - [Prisma Postgres](/postgres): A managed PostgreSQL database that is optimized for Prisma ORM. 
@@ -32,7 +32,7 @@ The Console is organized around four main concepts: - **[User account](/console/concepts#user-account)**: Your personal account to manage workspaces and projects - **[Workspaces](/console/concepts#workspace)**: Team-level container where billing is managed - **[Projects](/console/concepts#project)**: Application-level container within a workspace -- **[Resources](/console/concepts#resources)**: Actual services or databases within a project (databases for Prisma Postgres, environments for Accelerate) +- **[Resources](/console/concepts#resources)**: Actual services or databases within a project (databases for Prisma Postgres) Read more about [Console concepts](/console/concepts). @@ -44,6 +44,6 @@ Learn more about the [Console CLI commands](/cli/console). ## API keys -An API key is required to authenticate requests from your Prisma Client to products such as Prisma Accelerate and Prisma Optimize. API keys are generated and managed at the resource level. +An API key is required to authenticate requests from your Prisma Client to products. API keys are generated and managed at the resource level. Learn more about [API keys](/console/features/api-keys). diff --git a/apps/docs/content/docs/orm/more/comparisons/prisma-and-drizzle.mdx b/apps/docs/content/docs/orm/more/comparisons/prisma-and-drizzle.mdx index 42d4a50df2..a925defedf 100644 --- a/apps/docs/content/docs/orm/more/comparisons/prisma-and-drizzle.mdx +++ b/apps/docs/content/docs/orm/more/comparisons/prisma-and-drizzle.mdx @@ -285,14 +285,6 @@ const posts = await db.select().from(posts).where(ilike(posts.title, "%Hello Wor Both Drizzle and Prisma ORM have the ability to log queries and the underlying SQL generated. -## Additional products - -Both Drizzle and Prisma offer products alongside an ORM. Prisma Studio was released to allow users to interact with their database via a GUI and also allows for limited self-hosting for use within a team. 
Drizzle Studio was released to accomplish the same tasks. - -In addition to Prisma Studio, Prisma offers commercial products via the Prisma Data Platform: - -- [Prisma Accelerate](https://www.prisma.io/accelerate?utm_source=docs&utm_medium=orm-docs): A connection pooler and global cache that integrates with Prisma ORM. Users can take advantage of connection pooling immediately and can control caching at an individual query level. -- [Prisma Optimize](https://www.prisma.io/optimize?utm_source=docs&utm_medium=orm-docs): A query analytics tool that provides deep insights, actionable recommendations, and allows you to interact with Prisma AI for further insights and optimizing your database queries.

-These products work hand-in-hand with Prisma ORM to offer comprehensive data tooling, making building data-driven applications easy by following [Data DX](https://www.datadx.io/) principles.
+Prisma's tools work hand-in-hand with Prisma ORM to offer comprehensive data tooling, making building data-driven applications easy by following [Data DX](https://www.datadx.io/) principles.

diff --git a/apps/docs/content/docs/orm/prisma-client/client-extensions/extension-examples.mdx b/apps/docs/content/docs/orm/prisma-client/client-extensions/extension-examples.mdx index 266064f474..0902e5a751 100644 --- a/apps/docs/content/docs/orm/prisma-client/client-extensions/extension-examples.mdx +++ b/apps/docs/content/docs/orm/prisma-client/client-extensions/extension-examples.mdx @@ -12,7 +12,6 @@ The following is a list of extensions we've built at Prisma:

| Extension | Description | | :------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------- | -| [`@prisma/extension-accelerate`](https://www.npmjs.com/package/@prisma/extension-accelerate) | Enables [Accelerate](https://www.prisma.io/accelerate), a global database cache available in 300+ locations with built-in connection pooling | | [`@prisma/extension-read-replicas`](https://github.com/prisma/extension-read-replicas) | Adds read replica support to 
Prisma Client | ## Extensions made by Prisma's community diff --git a/apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx b/apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx index d96735ac8c..d6d5dc718a 100644 --- a/apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx +++ b/apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx @@ -39,49 +39,9 @@ This command: - Connects your CLI to your [Prisma Data Platform](https://console.prisma.io) account. If you're not logged in or don't have an account, your browser will open to guide you through creating a new account or signing into your existing one. - Creates a `prisma` directory containing a `schema.prisma` file for your database models. -- Creates a `.env` file with your `DATABASE_URL` (e.g., for Prisma Postgres it should have something similar to `DATABASE_URL="prisma+postgres://accelerate.prisma-data.net/?api_key=eyJhbGciOiJIUzI..."`). +- Creates a `.env` file with your `DATABASE_URL`. -You'll need to install the Client extension required to use Prisma Postgres: -```npm -npm i @prisma/extension-accelerate -``` - -And extend `PrismaClient` with the extension in your application code: - -```typescript -import { PrismaClient } from "./generated/client"; -import { withAccelerate } from "@prisma/extension-accelerate"; - -export interface Env { - DATABASE_URL: string; -} - -export default { - async fetch(request, env, ctx) { - const prisma = new PrismaClient({ - datasourceUrl: env.DATABASE_URL, - }).$extends(withAccelerate()); - - const users = await prisma.user.findMany(); - const result = JSON.stringify(users); - ctx.waitUntil(prisma.$disconnect()); - return new Response(result); - }, -} satisfies ExportedHandler; -``` - -:::note -Call `ctx.waitUntil(prisma.$disconnect())` before returning so the Worker releases the database connection when the response is done. 
Otherwise the Worker may not disconnect in time and can run out of memory. -::: - -Then setup helper scripts to perform migrations and generate `PrismaClient` as [shown in this section](/orm/prisma-client/deployment/edge/deploy-to-cloudflare#development). - -:::note - -You need to have the `dotenv-cli` package installed as Cloudflare Workers does not support `.env` files. You can do this by running the following command to install the package locally in your project: `npm install -D dotenv-cli`. - -::: ### Using an edge-compatible driver @@ -97,11 +57,8 @@ The edge-compatible drivers for Cloudflare Workers and Pages are: There's [also work being done](https://github.com/sidorares/node-mysql2/pull/2289) on the `node-mysql2` driver which will enable access to traditional MySQL databases from Cloudflare Workers and Pages in the future as well. -:::note - -If your application uses PostgreSQL, we recommend using [Prisma Postgres](/postgres). It is fully supported on edge runtimes and does not require a specialized edge-compatible driver. For other databases, [Prisma Accelerate](/accelerate) extends edge compatibility so you can connect to _any_ database from _any_ edge function provider. +If your application uses PostgreSQL, we recommend using [Prisma Postgres](/postgres). It is fully supported on edge runtimes and does not require a specialized edge-compatible driver. -::: ### Setting your database connection URL as an environment variable @@ -181,7 +138,7 @@ This command requires you to be authenticated, and will ask you to log in to you ### Size limits on free accounts -Cloudflare has a [size limit of 3 MB for Workers on the free plan](https://developers.cloudflare.com/workers/platform/limits/). If your application bundle with Prisma ORM exceeds that size, we recommend upgrading to a paid Worker plan or using Prisma Accelerate to deploy your application. 
+Cloudflare has a [size limit of 3 MB for Workers on the free plan](https://developers.cloudflare.com/workers/platform/limits/). If your application bundle with Prisma ORM exceeds that size, we recommend upgrading to a paid Worker plan. ### Deploying a Next.js app to Cloudflare Pages with `@cloudflare/next-on-pages` diff --git a/apps/docs/content/docs/postgres/database/connection-pooling.mdx b/apps/docs/content/docs/postgres/database/connection-pooling.mdx index 6649ee1705..75f06ca0bf 100644 --- a/apps/docs/content/docs/postgres/database/connection-pooling.mdx +++ b/apps/docs/content/docs/postgres/database/connection-pooling.mdx @@ -53,70 +53,3 @@ With TCP connections, there are no limits on query duration, transaction duratio For most production applications, pooled connections are recommended. Use direct connections when you need a persistent connection or are working in a low-concurrency environment like local development. -## Connection pooling with Accelerate - -You can also connect to your Prisma Postgres database through [Prisma Accelerate](/accelerate), which provides built-in connection pooling along with a global [caching layer](/postgres/database/caching). - -Accelerate uses a proxy-based approach and requires Prisma ORM with the Accelerate client extension: - -```ts -import { PrismaClient } from "../generated/prisma/client"; -import { withAccelerate } from "@prisma/extension-accelerate"; - -const prisma = new PrismaClient({ - accelerateUrl: process.env.DATABASE_URL, -}).$extends(withAccelerate()); -``` - -Your Accelerate connection string uses the `prisma+postgres://` protocol: - -```bash -DATABASE_URL="prisma+postgres://accelerate.prisma-data.net/?api_key=API_KEY" -``` - -### Configurable limits - -When connected via Accelerate, connection pool size, query duration, transaction duration, and response size have default limits that you can adjust from the **Settings** tab in your Prisma Postgres project in the [Prisma Console](https://console.prisma.io). 
- -#### Connection pool size - -| | Free | Starter | Pro | Business | -| ------------------------- | ---- | ------- | --- | -------- | -| **Connection limit** | 10 | 100 | 500 | 1000 | - -#### Query timeout - -| | Free | Starter | Pro | Business | -| ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | -| **Query timeout** | Up to 10 seconds | Up to 10 seconds | Up to 20 seconds | Up to 60 seconds | - -:::warning -If your queries regularly take longer than 10 seconds, consider optimizing them. Long-running queries can indicate missing indexes or inefficient data access patterns. See the [error reference](/postgres/error-reference#p6009-responsesizelimitexceeded) for more details. -::: - -#### Interactive transaction timeout - -| | Free | Starter | Pro | Business | -| ---------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | -| **Transaction timeout** | Up to 15 seconds | Up to 15 seconds | Up to 30 seconds | Up to 90 seconds | - -When you increase the transaction timeout in the Prisma Console, you must also set a matching `timeout` in your application code: - -```ts -await prisma.$transaction( - async (tx) => { - // Your queries here - }, - { - timeout: 30000, // 30s — must match your Console setting - }, -); -``` - -#### Response size - -| | Free | Starter | Pro | Business | -| -------------- | ----------- | ----------- | ------------ | ------------ | -| **Response size** | Up to 5 MB | Up to 5 MB | Up to 10 MB | Up to 20 MB | - -See the [error reference](/postgres/error-reference#p6009-responsesizelimitexceeded) and [pricing page](https://www.prisma.io/pricing) for more information. 
diff --git a/apps/docs/content/docs/postgres/database/direct-connections.mdx b/apps/docs/content/docs/postgres/database/direct-connections.mdx index 158fdc85bc..0c7df7a854 100644 --- a/apps/docs/content/docs/postgres/database/direct-connections.mdx +++ b/apps/docs/content/docs/postgres/database/direct-connections.mdx @@ -8,7 +8,7 @@ metaDescription: Learn about connecting directly to your Prisma Postgres databas ## Overview -Prisma Postgres is the perfect choice for your applications, whether you connect to it via [Prisma ORM](/orm) or any other ORM, database library / tool of your choice. If you use it with Prisma ORM, Prisma Postgres comes with built-in connection pooling, and an integrated caching layer (powered by [Prisma Accelerate](/accelerate)). +Prisma Postgres is the perfect choice for your applications, whether you connect to it via [Prisma ORM](/orm) or any other ORM, database library / tool of your choice. If you use it with Prisma ORM, Prisma Postgres comes with built-in connection pooling. If you connect to it via another tool, you can do so with a [direct connection string](#connection-string) following the conventional PostgreSQL format. @@ -80,84 +80,10 @@ The TCP tunnel feature has been **deprecated** in favor of [direct connections]( ::: -Prisma Postgres can be accessed securely via a TCP tunnel using the [`@prisma/ppg-tunnel`](https://www.npmjs.com/package/@prisma/ppg-tunnel) package, an authentication proxy designed for local database workflows. This package establishes a secure connection to Prisma Postgres through a local TCP server, enabling secure access while automatically handling traffic routing and authentication. +Use your direct TCP connection string with your preferred PostgreSQL client or tooling. Common options include: -:::note +- [`psql`](https://www.postgresql.org/docs/current/app-psql.html), the PostgreSQL command-line client. +- [Prisma Studio](/orm/tools/prisma-studio) for browsing and editing application data. 
+- GUI database editors such as [TablePlus](https://tableplus.com/), [DataGrip](https://www.jetbrains.com/datagrip/), [DBeaver](https://dbeaver.io/), and [Postico](https://eggerapps.at/postico2/). -This is a [Early Access](/console/more/feature-maturity#early-access) feature of Prisma Postgres. It is not recommended for production use and is not intended for application-level access. - -While in Early Access, usage of the TCP tunnel will be free of charge. - -::: - -### Prerequisites - -- Node.js installed on your machine -- A [Prisma Postgres](/postgres) database connection string set as an environment variable called `DATABASE_URL` - -### Exporting environment variables - -The tunnel expects you to have the following `DATABASE_URL` environment variable set to the connection URL of your Prisma Postgres instance. If you are running the tunnel command from your project where an `.env` file has `DATABASE_URL` already set, you can skip this step as the tunnel will automatically pick it up. - -To export the `DATABASE_URL` environment variable temporarily in a terminal session: - -```bash tab="macOS" -export DATABASE_URL="prisma+postgres://accelerate.prisma-data.net/?api_key=API_KEY" -``` - -```bash tab="Linux" -export DATABASE_URL="prisma+postgres://accelerate.prisma-data.net/?api_key=API_KEY" -``` - -```bash tab="Windows" -set "DATABASE_URL=prisma+postgres://accelerate.prisma-data.net/?api_key=API_KEY" -``` - -Replace the `API_KEY` placeholder with the API key value of your Prisma Postgres instance. - -### Starting the TCP tunnel - -To start the proxy server, run the following command: - -```npm -npx @prisma/ppg-tunnel -``` - -```text no-copy wrap -Prisma Postgres auth proxy listening on 127.0.0.1:52604 🚀 - -Your connection is authenticated using your Prisma Postgres API key. -... - -============================== -hostname: 127.0.0.1 -port: 52604 -username: -password: -============================== -``` - -This will start the tunnel on a randomly assigned TCP port. 
The proxy automatically handles authentication, so any database credentials are accepted. The tunnel also encrypts traffic, meaning clients should be set to not require SSL. - -You can now connect to your Prisma Postgres editor using your favorite PostgreSQL client, e.g. `psql` or a GUI like [TablePlus](/guides/postgres/viewing-data#2a-connect-to-prisma-postgres-using-tableplus) or [DataGrip](/guides/postgres/viewing-data#2b-connect-to-prisma-postgres-using-datagrip). To do so, you only need to provide the **`host`** and **`port`** from the output above. The TCP tunnel will handle authentication via the API key in your Prisma Postgres connection URL, so you can omit the values for **`username`** and **`password`.** - -### Customizing host and port - -By default, the tunnel listens on `127.0.0.1` and assigns a random port. Since it provides access to your Prisma Postgres database, it should only be exposed within a trusted network. You can specify a custom host and port using the `--host` and `--port` flags: - -```npm -npx @prisma/ppg-tunnel --host 127.0.0.1 --port 5432 -``` - -### Next steps - -The local tunnel enables you to access Prisma Postgres from 3rd party database editors such as Postico, DataGrip, TablePlus and pgAdmin. Learn more in this [section](/guides/postgres/viewing-data). - -### Security considerations - -When using the TCP tunnel, keep the following in mind: - -- The tunnel does not support schema management (i.e., DDL queries outside of Prisma Migrate). -- The tunnel should not be exposed to untrusted networks. -- Always store API keys securely and avoid hardcoding them. -- Ensure that only necessary users have direct access to the Prisma Postgres database. +For step-by-step examples of connecting with database editors, see [Viewing data in Prisma Postgres](/guides/postgres/viewing-data). 
From e741ee3f6ec4c52e7a5953d6acb63fe0d46d45b6 Mon Sep 17 00:00:00 2001 From: Mike Hartington Date: Mon, 23 Feb 2026 20:08:01 -0500 Subject: [PATCH 2/3] Apply suggestions from code review Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> --- apps/docs/content/docs/console/index.mdx | 4 ++-- .../prisma-client/deployment/edge/deploy-to-cloudflare.mdx | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/apps/docs/content/docs/console/index.mdx b/apps/docs/content/docs/console/index.mdx index cca050ba48..2e0f7ebfc8 100644 --- a/apps/docs/content/docs/console/index.mdx +++ b/apps/docs/content/docs/console/index.mdx @@ -11,7 +11,7 @@ metaDescription: Learn about the Console to integrate the Prisma Data Platform p The [Console](https://console.prisma.io/login) enables you to manage and configure your projects that use Prisma products, and helps you integrate them into your application: -- [Optimize](/optimize): Provides you recommendations that can help you make your database queries faster. +- [Optimize](/optimize): Provides you with recommendations that can help you make your database queries faster. - [Prisma Postgres](/postgres): A managed PostgreSQL database that is optimized for Prisma ORM. ## Getting started @@ -44,6 +44,6 @@ Learn more about the [Console CLI commands](/cli/console). ## API keys -An API key is required to authenticate requests from your Prisma Client to products. API keys are generated and managed at the resource level. +An API key is required to authenticate Prisma Client requests to Prisma Data Platform resources. API keys are generated and managed at the resource level. Learn more about [API keys](/console/features/api-keys). 
diff --git a/apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx b/apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx index d6d5dc718a..2c91d0bb1b 100644 --- a/apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx +++ b/apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx @@ -57,7 +57,7 @@ The edge-compatible drivers for Cloudflare Workers and Pages are: There's [also work being done](https://github.com/sidorares/node-mysql2/pull/2289) on the `node-mysql2` driver which will enable access to traditional MySQL databases from Cloudflare Workers and Pages in the future as well. -If your application uses PostgreSQL, we recommend using [Prisma Postgres](/postgres). It is fully supported on edge runtimes and does not require a specialized edge-compatible driver. +If your application uses PostgreSQL, we recommend using [Prisma Postgres](/postgres). It is fully supported on edge runtimes and does not require a specialized edge-compatible driver. Review the [Prisma Postgres limitations](/postgres/database/limitations) to understand current constraints. 
### Setting your database connection URL as an environment variable From b1b9f03c38f8e95a301273337646282a494a6797 Mon Sep 17 00:00:00 2001 From: Mike Hartington Date: Mon, 23 Feb 2026 20:31:04 -0500 Subject: [PATCH 3/3] chore(docs): clean up local links --- .../docs/accelerate/more/troubleshoot.mdx | 24 +++++++++---------- .../docs/orm/reference/error-reference.mdx | 6 ++--- .../postgres/database/direct-connections.mdx | 2 +- .../content/docs/postgres/error-reference.mdx | 24 +++++++++---------- 4 files changed, 28 insertions(+), 28 deletions(-) diff --git a/apps/docs/content/docs/accelerate/more/troubleshoot.mdx b/apps/docs/content/docs/accelerate/more/troubleshoot.mdx index 9557f2ed90..d7b25e4ed9 100644 --- a/apps/docs/content/docs/accelerate/more/troubleshoot.mdx +++ b/apps/docs/content/docs/accelerate/more/troubleshoot.mdx @@ -10,7 +10,7 @@ When working with Accelerate, you may encounter errors often highlighted by spec ## `P6009` (`ResponseSizeLimitExceeded`) -This error is triggered when the response size from a database query exceeds [the configured query response size limit](/postgres/database/connection-pooling#response-size). We've implemented this restriction to safeguard your application performance, as retrieving data over 5MB can significantly slow down your application due to multiple network layers. Typically, transmitting more than 5MB of data is common when conducting ETL (Extract, Transform, Load) operations. However, for other scenarios such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts, it should generally be avoided. These use cases, while essential, can often be optimized to work within [the configured query response size limit](/postgres/database/connection-pooling#response-size), ensuring smoother performance and a better user experience. 
+This error is triggered when the response size from a database query exceeds the configured query response size limit. We've implemented this restriction to safeguard your application performance, as retrieving data over 5MB can significantly slow down your application due to multiple network layers. Transmitting more than 5MB of data is typically only necessary when conducting ETL (Extract, Transform, Load) operations. However, for other scenarios such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts, it should generally be avoided. These use cases, while essential, can often be optimized to work within the configured query response size limit, ensuring smoother performance and a better user experience.

### Possible causes for [`P6009`](/orm/reference/error-reference#p6009-responsesizelimitexceeded)

@@ -18,25 +18,25 @@ This error is triggered when the response size from a database query exceeds th

This error may arise if images or files stored within your table are being fetched, resulting in a large response size. Storing assets directly in the database is generally discouraged because it significantly impacts database performance and scalability. In addition to performance, it makes database backups slow and significantly increases the cost of storing routine backups.

-**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like [Cloudflare R2](https://developers.cloudflare.com/r2/), [AWS S3](https://aws.amazon.com/pm/serv-s3/), or [Cloudinary](https://cloudinary.com/). These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size. 
+**Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like [Cloudflare R2](https://developers.cloudflare.com/r2/), [AWS S3](https://aws.amazon.com/pm/serv-s3/), or [Cloudinary](https://cloudinary.com/). These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size. #### Over-fetching of data -In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding [the configured query response size limit](/postgres/database/connection-pooling#response-size). This could happen when the [`where`](/orm/reference/prisma-client-reference#where) clause in the query is incorrect or entirely missing. +In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding the configured query response size limit. This could happen when the [`where`](/orm/reference/prisma-client-reference#where) clause in the query is incorrect or entirely missing. -**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using [pagination](/v6/orm/prisma-client/queries/pagination). Additionally, use the [`select`](/orm/reference/prisma-client-reference#select) clause to return only the necessary fields, reducing the response size. +**Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using [pagination](/v6/orm/prisma-client/queries/pagination). 
Additionally, use the [`select`](/orm/reference/prisma-client-reference#select) clause to return only the necessary fields, reducing the response size. #### Fetching a large volume of data In many data processing workflows, especially those involving ETL (Extract-Transform-Load) processes or scheduled CRON jobs, there's a need to extract large amounts of data from data sources (like databases, APIs, or file systems) for analysis, reporting, or further processing. If you are running an ETL/CRON workload that fetches a huge chunk of data for analytical processing then you might run into this limit. -**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation. +**Suggested solution:** Configure the query response size limit to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation. ## `P6004` (`QueryTimeout`) -This error occurs when a database query fails to return a response within [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout). The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. We enforce this limit to prevent unintentional long-running queries that can overload system resources. +This error occurs when a database query fails to return a response within the configured query timeout limit. The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. 
We enforce this limit to prevent unintentional long-running queries that can overload system resources. -> The time for Accelerate's cross-region networking is excluded from [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) limit. +> The time for Accelerate's cross-region networking is excluded from the configured query timeout limit. ### Possible causes for [`P6004`](/orm/reference/error-reference#p6004-querytimeout) @@ -44,17 +44,17 @@ This error could be caused by numerous reasons. Some of the prominent ones are: #### High traffic and insufficient connections -If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then the queries would need to wait for a connection to become available. This situation can lead to queries waiting longer than [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) for a connection, ultimately triggering a timeout error if they do not get serviced within this duration. +If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then the queries would need to wait for a connection to become available. This situation can lead to queries waiting longer than the configured query timeout limit for a connection, ultimately triggering a timeout error if they do not get serviced within this duration. -**Suggested solution**: Review and possibly increase the `connection_limit` specified in the connection string parameter when setting up Accelerate in a platform environment ([reference](/postgres/database/connection-pooling#connection-pool-size)). This limit should align with your database's maximum number of connections. +**Suggested solution**: Review and possibly increase the `connection_limit` specified in the connection string parameter when setting up Accelerate in a platform environment. 
This limit should align with your database's maximum number of connections. By default, the connection limit is set to 10 unless a different `connection_limit` is specified in your database connection string. #### Long-running queries -Queries may be slow to respond, hitting [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) even when connections are available. This could happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table. +Queries may be slow to respond, hitting the configured query timeout limit even when connections are available. This could happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table. -**Suggested solution**: Configure the [query timeout limit](/postgres/database/connection-pooling#query-timeout) to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries. +**Suggested solution**: Configure the query timeout limit to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries. 
#### Database resource contention @@ -79,7 +79,7 @@ Additionally, direct connections could have a significant impact on your databas If your application's runtime environment supports Prisma ORM natively and you're considering this strategy to circumvent P6009 and P6004 errors, you might create two `PrismaClient` instances: 1. An instance using the Accelerate connection string (prefixed with `prisma://`) for general operations. -2. Another instance with the direct database connection string (e.g., prefixed with `postgres://`, `mysql://`, etc.) for specific operations anticipated to exceed [the configured query limit timeout](/postgres/database/connection-pooling#query-timeout) or to result in responses larger than [the configured query response size limit](/postgres/database/connection-pooling#response-size). +2. Another instance with the direct database connection string (e.g., prefixed with `postgres://`, `mysql://`, etc.) for specific operations anticipated to exceed the configured query timeout limit or to result in responses larger than the configured query response size limit. ```ts export const prisma = new PrismaClient({ diff --git a/apps/docs/content/docs/orm/reference/error-reference.mdx b/apps/docs/content/docs/orm/reference/error-reference.mdx index 3e6e267248..6addd0391e 100644 --- a/apps/docs/content/docs/orm/reference/error-reference.mdx +++ b/apps/docs/content/docs/orm/reference/error-reference.mdx @@ -480,13 +480,13 @@ The included usage of the current plan has been exceeded. This can only occur on #### `P6004` (`QueryTimeout`) -The global timeout of Accelerate has been exceeded. You can find the limit [here](/postgres/database/connection-pooling#query-timeout). +The global timeout of Accelerate has been exceeded. > Also see the [troubleshooting guide](/accelerate/more/troubleshoot#p6004-querytimeout) for more information. #### `P6005` (`InvalidParameters`) -The user supplied invalid parameters. Currently only relevant for transaction methods. 
For example, setting a timeout that is too high. You can find the limit [here](/postgres/database/connection-pooling#interactive-transaction-timeout). +The user supplied invalid parameters. Currently only relevant for transaction methods. For example, setting a timeout that is too high. #### `P6006` (`VersionNotSupported`) @@ -500,7 +500,7 @@ The engine failed to start. For example, it couldn't establish a connection to t #### `P6009` (`ResponseSizeLimitExceeded`) -The global response size limit of Accelerate has been exceeded. You can find the limit [here](/postgres/database/connection-pooling#response-size). +The global response size limit of Accelerate has been exceeded. > Also see the [troubleshooting guide](/accelerate/more/troubleshoot#p6009-responsesizelimitexceeded) for more information. diff --git a/apps/docs/content/docs/postgres/database/direct-connections.mdx b/apps/docs/content/docs/postgres/database/direct-connections.mdx index 0c7df7a854..97d1122fe9 100644 --- a/apps/docs/content/docs/postgres/database/direct-connections.mdx +++ b/apps/docs/content/docs/postgres/database/direct-connections.mdx @@ -83,7 +83,7 @@ The TCP tunnel feature has been **deprecated** in favor of [direct connections]( Use your direct TCP connection string with your preferred PostgreSQL client or tooling. Common options include: - [`psql`](https://www.postgresql.org/docs/current/app-psql.html), the PostgreSQL command-line client. -- [Prisma Studio](/orm/tools/prisma-studio) for browsing and editing application data. +- [Prisma Studio](https://www.prisma.io/studio) for browsing and editing application data. - GUI database editors such as [TablePlus](https://tableplus.com/), [DataGrip](https://www.jetbrains.com/datagrip/), [DBeaver](https://dbeaver.io/), and [Postico](https://eggerapps.at/postico2/). For step-by-step examples of connecting with database editors, see [Viewing data in Prisma Postgres](/guides/postgres/viewing-data). 
diff --git a/apps/docs/content/docs/postgres/error-reference.mdx b/apps/docs/content/docs/postgres/error-reference.mdx index 8d5bbc4a05..53073d6bdd 100644 --- a/apps/docs/content/docs/postgres/error-reference.mdx +++ b/apps/docs/content/docs/postgres/error-reference.mdx @@ -12,9 +12,9 @@ It is important to understand the meaning of these errors, why they occur, and h ## `P6009` (`ResponseSizeLimitExceeded`) -This error is triggered when the response size from a database query exceeds [the configured query response size limit](/postgres/database/connection-pooling#response-size). We've implemented this restriction to safeguard your application performance, as retrieving data over `5MB` can significantly slow down your application due to multiple network layers. +This error is triggered when the response size from a database query exceeds the configured query response size limit. We've implemented this restriction to safeguard your application performance, as retrieving data over `5MB` can significantly slow down your application due to multiple network layers. -Typically, transmitting more than `5MB` of data is common when conducting ETL (Extract, Transform, Load) operations. However, for other scenarios such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts, it should generally be avoided. These use cases, while essential, can often be optimized to work within [the configured query response size limit](/postgres/database/connection-pooling#response-size), ensuring smoother performance and a better user experience. +Typically, transmitting more than `5MB` of data is common when conducting ETL (Extract, Transform, Load) operations. However, for other scenarios such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts, it should generally be avoided. 
These use cases, while essential, can often be optimized to work within the configured query response size limit, ensuring smoother performance and a better user experience. ### Possible causes for [`P6009`](/orm/reference/error-reference#p6009-responsesizelimitexceeded) @@ -22,27 +22,27 @@ Typically, transmitting more than `5MB` of data is common when conducting ETL (E This error may arise if images or files stored within your table are being fetched, resulting in a large response size. Storing assets directly in the database is generally discouraged because it significantly impacts database performance and scalability. In addition to performance, it makes database backups slow and significantly increases the cost of storing routine backups. -**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like [Cloudflare R2](https://developers.cloudflare.com/r2/), [AWS S3](https://aws.amazon.com/pm/serv-s3/), or [Cloudinary](https://cloudinary.com/). These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size. +**Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like [Cloudflare R2](https://developers.cloudflare.com/r2/), [AWS S3](https://aws.amazon.com/pm/serv-s3/), or [Cloudinary](https://cloudinary.com/). These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size. 
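A back-of-the-envelope sketch of the difference follows; the 2 MB figure, the CDN URL, and the row count are illustrative, not tied to any real bucket or service.

```typescript
// Back-of-the-envelope sketch; the 2 MB figure and the URL are illustrative.
const inlineImage = new Uint8Array(2 * 1024 * 1024); // asset stored inline in a Bytes/BYTEA column
const imageUrl = "https://cdn.example.com/avatars/user-123.png"; // persisted after uploading the asset

const inlineBytes = inlineImage.byteLength;
const urlBytes = new TextEncoder().encode(imageUrl).length;

// A query returning 50 such rows carries ~100 MB inline, but only a few
// kilobytes when each row holds a URL instead.
console.log(inlineBytes, urlBytes);
```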
#### Over-fetching of data -In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding [the configured query response size limit](/postgres/database/connection-pooling#response-size). This could happen when [the `where` clause](/orm/reference/prisma-client-reference#where) in the query is incorrect or entirely missing. +In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding the configured query response size limit. This could happen when [the `where` clause](/orm/reference/prisma-client-reference#where) in the query is incorrect or entirely missing. -**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using [pagination](/v6/orm/prisma-client/queries/pagination). Additionally, use the [`select`](/orm/reference/prisma-client-reference#select) clause to return only the necessary fields, reducing the response size. +**Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using [pagination](/v6/orm/prisma-client/queries/pagination). Additionally, use the [`select`](/orm/reference/prisma-client-reference#select) clause to return only the necessary fields, reducing the response size. #### Fetching a large volume of data In many data processing workflows, especially those involving ETL (Extract-Transform-Load) processes or scheduled CRON jobs, there's a need to extract large amounts of data from data sources (like databases, APIs, or file systems) for analysis, reporting, or further processing. 
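Such extractions can stay under a per-query response limit by pulling rows in keyset-paginated batches. The sketch below uses an in-memory array standing in for the table and a hypothetical model; with Prisma Client, the same loop would use `findMany` with `cursor` and `take`.

```typescript
// Sketch of keyset-paginated batch extraction. An in-memory array stands in
// for the table; the `Row` model is hypothetical.
type Row = { id: number; payload: string };

const table: Row[] = Array.from({ length: 2500 }, (_, i) => ({
  id: i + 1,
  payload: `row-${i + 1}`,
}));

// Return up to `take` rows with id greater than `cursor` (keyset pagination).
function fetchBatch(cursor: number, take: number): Row[] {
  return table.filter((r) => r.id > cursor).slice(0, take);
}

let cursor = 0;
let processed = 0;
for (;;) {
  const batch = fetchBatch(cursor, 1000); // each response stays well below the size limit
  if (batch.length === 0) break;
  processed += batch.length; // ...transform/load this batch here...
  cursor = batch[batch.length - 1].id; // resume after the last id seen
}
console.log(processed); // all rows handled, 1000 at a time
```

Keyset pagination (resuming from the last seen `id`) keeps each batch cheap even deep into the table, unlike large `skip` offsets.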
If you are running an ETL/CRON workload that fetches a huge chunk of data for analytical processing then you might run into this limit. -**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation. +**Suggested solution:** Configure the query response size limit to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation. ## `P6004` (`QueryTimeout`) -This error occurs when a database query fails to return a response within [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout). The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. We enforce this limit to prevent unintentional long-running queries that can overload system resources. +This error occurs when a database query fails to return a response within the configured query timeout limit. The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. We enforce this limit to prevent unintentional long-running queries that can overload system resources. :::info -The time for Prisma Postgres's cross-region networking is excluded from [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) limit. +The time for Prisma Postgres's cross-region networking is excluded from the configured query timeout limit. ::: @@ -52,17 +52,17 @@ This error could be caused by numerous reasons. 
Some of the prominent ones are: #### High traffic and insufficient connections -If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then the queries would need to wait for a connection to become available. This situation can lead to queries waiting longer than [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) for a connection, ultimately triggering a timeout error if they do not get serviced within this duration. +If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then the queries would need to wait for a connection to become available. This situation can lead to queries waiting longer than the configured query timeout limit for a connection, ultimately triggering a timeout error if they do not get serviced within this duration. -**Suggested solution**: Review and possibly increase the `connection_limit` specified in the connection string parameter when setting up Accelerate in a platform environment ([reference](/postgres/database/connection-pooling#connection-pool-size)). This limit should align with your database's maximum number of connections. +**Suggested solution**: Review and possibly increase the `connection_limit` specified in the connection string parameter when setting up Accelerate in a platform environment. This limit should align with your database's maximum number of connections. By default, the connection limit is set to 10 unless a different `connection_limit` is specified in your database connection string. #### Long-running queries -Queries may be slow to respond, hitting [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) even when connections are available. This could happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table. 
+Queries may be slow to respond, hitting the configured query timeout limit even when connections are available. This could happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table. -**Suggested solution**: Configure the [query timeout limit](/postgres/database/connection-pooling#query-timeout) to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries. +**Suggested solution**: Configure the query timeout limit to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries. #### Database resource contention