From c86a9c33b701f6291cf9cd640a760c4b5edf70ed Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Tue, 30 Dec 2025 15:22:04 +0000 Subject: [PATCH 01/10] Tidying up more applies_to tags in the Troubleshooting section Part of #4117 --- .../increase-capacity-data-node.md | 27 +++++++------------ 1 file changed, 10 insertions(+), 17 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-capacity-data-node.md b/troubleshoot/elasticsearch/increase-capacity-data-node.md index 787f3b77f0..69a7ca2be2 100644 --- a/troubleshoot/elasticsearch/increase-capacity-data-node.md +++ b/troubleshoot/elasticsearch/increase-capacity-data-node.md @@ -4,32 +4,27 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-capacity-data-node.html applies_to: stack: - deployment: - eck: - ess: - ece: - self: products: - id: elasticsearch --- # Increase the disk capacity of data nodes [increase-capacity-data-node] -:::::::{tab-set} +:::::::{applies-switch} -::::::{tab-item} {{ech}} +::::::{applies-item} { ess: } In order to increase the disk capacity of the data nodes in your cluster: 1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Hosted deployments** panel, click the gear under the `Manage deployment` column that corresponds to the name of your deployment. -3. If autoscaling is available but not enabled, enable it. You can do this by clicking the button `Enable autoscaling` on a banner like the one below: +2. On the **Hosted deployments** panel, select the gear under the **Manage deployment** column that corresponds to the name of your deployment. +3. If autoscaling is available but not enabled, enable it by clicking the **Enable autoscaling** button in a banner like the one below: :::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_banner.png :alt: Autoscaling banner :screenshot: ::: - Or you can go to `Actions > Edit deployment`, check the checkbox `Autoscale` and click `save` at the bottom of the page. + Or you can go to **Actions > Edit deployment**, check the checkbox **Autoscale** and select **Save** from the bottom of the page. :::{image} /troubleshoot/images/elasticsearch-reference-enable_autoscaling.png :alt: Enabling autoscaling @@ -43,17 +38,15 @@ In order to increase the disk capacity of the data nodes in your cluster: :screenshot: ::: - or you can go to `Actions > Edit deployment` and look for the label `LIMIT REACHED` as shown below: + Alternatively, you can go to **Actions > Edit deployment** and look for the label `LIMIT REACHED` as shown below: + + ![Autoscaling limits](/troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png "Autoscaling limits reached") - :::{image} /troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png - :alt: Autoscaling limits reached - :screenshot: - ::: - If you are seeing the banner click `Update autoscaling settings` to go to the `Edit` page. Otherwise, you are already in the `Edit` page, click `Edit settings` to increase the autoscaling limits. After you perform the change click `save` at the bottom of the page. + If you are seeing the banner, click **Update autoscaling settings** to go to the **Edit** page. Otherwise, if you are already on the **Edit** page, click **Edit settings** to increase the autoscaling limits. After you perform the change select **Save** from the bottom of the page. 
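Once the deployment has finished scaling, you can confirm from the API side that the data nodes have regained disk headroom. A minimal check (the column list is optional and can be adjusted):

```console
GET _cat/allocation?v&h=node,disk.percent,disk.avail,disk.total
```

Each data node should report a `disk.percent` value back below the high watermark once the extra capacity is in place.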
:::::: -::::::{tab-item} Self-managed +::::::{applies-item} { self: } In order to increase the data node capacity in your cluster, you will need to calculate the amount of extra disk space needed. 1. First, retrieve the relevant disk thresholds that will indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark: From f79d1c5c595d9efec7c4cb83bdc1f9926d199675 Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Tue, 30 Dec 2025 16:15:00 +0000 Subject: [PATCH 02/10] More edits to include the Total number of shards per node has been reached topic --- .../increase-capacity-data-node.md | 4 +- .../increase-cluster-shard-limit.md | 37 +++++-------------- 2 files changed, 11 insertions(+), 30 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-capacity-data-node.md b/troubleshoot/elasticsearch/increase-capacity-data-node.md index 69a7ca2be2..c148fc59e8 100644 --- a/troubleshoot/elasticsearch/increase-capacity-data-node.md +++ b/troubleshoot/elasticsearch/increase-capacity-data-node.md @@ -13,7 +13,7 @@ products: :::::::{applies-switch} ::::::{applies-item} { ess: } -In order to increase the disk capacity of the data nodes in your cluster: +To increase the disk capacity of the data nodes in your cluster: 1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. On the **Hosted deployments** panel, select the gear under the **Manage deployment** column that corresponds to the name of your deployment. @@ -47,7 +47,7 @@ In order to increase the disk capacity of the data nodes in your cluster: :::::: ::::::{applies-item} { self: } -In order to increase the data node capacity in your cluster, you will need to calculate the amount of extra disk space needed. +To increase the data node capacity in your cluster, you need to calculate the amount of extra disk space needed. 1. First, retrieve the relevant disk thresholds that will indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. 
The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark: diff --git a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md index 2bad9edb46..d7fc726dc3 100644 --- a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md @@ -4,11 +4,6 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-cluster-shard-limit.html applies_to: stack: - deployment: - eck: - ess: - ece: - self: products: - id: elasticsearch --- @@ -23,28 +18,14 @@ You might want to influence this data distribution by configuring the [`cluster. To fix this issue, complete the following steps: -:::::::{tab-set} +:::::::{applies-switch} -::::::{tab-item} {{ech}} -In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node in the cluster. We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value. +::::::{applies-item} { ess: } +To get the shards assigned, you need to increase the number of shards that can be collocated on a node in the cluster. You achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value. -**Use {{kib}}** +You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. -1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Hosted deployments** panel, click the name of your deployment. - - ::::{note} - If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md). - :::: - -3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**. - - :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png - :alt: {{kib}} Console - :screenshot: - ::: - -4. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): +1. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): ```console GET /_cluster/settings?flat_settings @@ -63,7 +44,7 @@ In order to get the shards assigned we’ll need to increase the number of shard 1. Represents the current configured value for the total number of shards that can reside on one node in the system. -5. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: +1. 
[Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: ```console PUT _cluster/settings @@ -77,8 +58,8 @@ In order to get the shards assigned we’ll need to increase the number of shard 1. The new value for the system-wide `total_shards_per_node` configuration is increased from the previous value of `300` to `400`. The `total_shards_per_node` configuration can also be set to `null`, which represents no upper bound with regards to how many shards can be collocated on one node in the system. :::::: -::::::{tab-item} Self-managed -In order to get the shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. +::::::{applies-item} { self: } +To get the shards assigned, you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. To inspect which tier is an index targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: @@ -109,7 +90,7 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec GET /_cluster/settings?flat_settings ``` - The response will look like this: + The response looks like this: ```console-result { From 72e13717631f7704f6491e06550c7be64215f3b4 Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Tue, 30 Dec 2025 16:24:31 +0000 Subject: [PATCH 03/10] Adding the Total number of shards for an index on a single node exceeded topic as well --- .../elasticsearch/increase-shard-limit.md | 35 +++++-------------- 1 file changed, 8 insertions(+), 27 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-shard-limit.md b/troubleshoot/elasticsearch/increase-shard-limit.md index 99f9c27b23..0c15a40879 100644 --- a/troubleshoot/elasticsearch/increase-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-shard-limit.md @@ -4,11 +4,6 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-shard-limit.html applies_to: stack: - deployment: - eck: - ess: - ece: - self: products: - id: elasticsearch --- @@ -23,28 +18,14 @@ You might want to influence this data distribution by configuring the [index.rou To fix this issue, complete the following steps: -:::::::{tab-set} +:::::::{applies-switch} -::::::{tab-item} {{ech}} -In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node. We’ll achieve this by inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value for the indices that have shards unassigned. +::::::{applies-item} { ess: } +To get the shards assigned, you need to increase the number of shards that can be collocated on a node. 
You achieve this by inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value for the indices that have shards unassigned.

-**Use {{kib}}**
+You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls.

-1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body).
-2. On the **Hosted deployments** panel, click the name of your deployment.
-
-   ::::{note}
-   If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md).
-   ::::
-
-3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.
-
-   :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
-   :alt: {{kib}} Console
-   :screenshot:
-   :::
-
-4. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards:
+1. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards:

    ```console
    GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings
    ```

    The response will look like this:

    ```console-result
    {
      "my-index-000001": {
        "settings": {
          "index.routing.allocation.total_shards_per_node": "1" <1>
        }
      }
    }
    ```

    1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index.

-5. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of shards that can be assigned on one node to a higher value:
+1. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of shards that can be assigned on one node to a higher value:

    ```console
    PUT /my-index-000001/_settings
    {
      "index" : {
        "routing.allocation.total_shards_per_node" : "2" <1>
      }
    }
    ```

    1. The new value for the `total_shards_per_node` configuration for the `my-index-000001` index is increased from the previous value of `1` to `2`. The `total_shards_per_node` configuration can also be set to `-1`, which represents no upper bound with regards to how many shards of the same index can reside on one node.
 ::::::

-::::::{tab-item} Self-managed
-In order to get the shards assigned you can add more nodes to your {{es}} cluster and assing the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes.
+::::::{applies-item} { self: }
+To get the shards assigned, you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes.
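For example, if the index turns out to target the warm tier (how to check the target tier is described next), a new node could join the cluster with settings along these lines — a minimal `elasticsearch.yml` sketch with illustrative names, not a complete node configuration:

```yaml
# elasticsearch.yml on the new node (example values)
cluster.name: my-cluster        # must match the existing cluster name
node.name: warm-node-3          # any unique node name
node.roles: [ data_warm ]       # role for the tier that needs more capacity
```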
To inspect which tier is an index targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: From 7b2eb01fceebefd79c012b713cfb99df866e30bb Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Tue, 30 Dec 2025 16:43:44 +0000 Subject: [PATCH 04/10] Adding the Warning: Not enough nodes to allocate all shard replicas topic --- .../elasticsearch/increase-tier-capacity.md | 57 +++++++------------ 1 file changed, 20 insertions(+), 37 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-tier-capacity.md b/troubleshoot/elasticsearch/increase-tier-capacity.md index a3492763e3..92ed2e2634 100644 --- a/troubleshoot/elasticsearch/increase-tier-capacity.md +++ b/troubleshoot/elasticsearch/increase-tier-capacity.md @@ -4,11 +4,6 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-tier-capacity.html applies_to: stack: - deployment: - eck: - ess: - ece: - self: products: - id: elasticsearch --- @@ -21,27 +16,14 @@ If a warning is encountered with not enough nodes to allocate all shard replicas To accomplish this, complete the following steps: -:::::::{tab-set} +:::::::{applies-switch} -::::::{tab-item} {{ech}} -One way to get the replica shards assigned is to add an availability zone. This will increase the number of data nodes in the {{es}} cluster so that the replica shards can be assigned. This can be done by editing your deployment. But first you need to discover which tier an index is targeting for assignment. Do this using {{kib}}. +::::::{applies-item} { ess: } +You can get the replica shards assigned by adding an availability zone. This action increases the number of data nodes in the {{es}} cluster so that the replica shards can be assigned. You achieve this by editing your deployment. -**Use {{kib}}** - -1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Hosted deployments** panel, click the name of your deployment. - - ::::{note} - If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md). - :::: - -3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**. - - :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png - :alt: {{kib}} Console - :screenshot: - ::: +This procedure includes steps you complete in {{kib}}, as well as steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. +First, you need to discover which tier an index is targeting for assignment. 
To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: @@ -49,7 +31,7 @@ To inspect which tier an index is targeting for assignment, use the [get index s GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings ``` -The response will look like this: +The response looks like this: ```console-result { @@ -64,7 +46,9 @@ The response will look like this: 1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster. -Now that you know the tier, you want to increase the number of nodes in that tier so that the replicas can be allocated. To do this you can either increase the size per zone to increase the number of nodes in the availability zone(s) you were already using, or increase the number of availability zones. Go back to the deployment’s landing page by clicking on the three horizontal bars on the top left of the screen and choosing **Manage this deployment**. On that page click the **Manage** button, and choose **Edit deployment**. Note that you must be logged in to [https://cloud.elastic.co/](https://cloud.elastic.co/) in order to do this. In the {{es}} section, find the tier where the replica shards could not be assigned. +Now that you know the tier, you want to increase the number of nodes in that tier so that the replicas can be allocated. To do this you can either increase the size per zone to increase the number of nodes in the availability zone(s) you were already using, or increase the number of availability zones. + +In {{kib}}, go to the deployment’s landing page by clicking on the three horizontal bars on the top left of the screen and choosing **Manage this deployment**. On that page click the **Manage** button, and choose **Edit deployment**. Note that you must be logged in to [https://cloud.elastic.co/](https://cloud.elastic.co/) in order to do this. In the {{es}} section, find the tier where the replica shards could not be assigned. :::{image} /troubleshoot/images/elasticsearch-reference-ess-advanced-config-data-tiers.png :alt: {{kib}} Console @@ -80,16 +64,15 @@ Now that you know the tier, you want to increase the number of nodes in that tie * Find the **Availability zones** selection. If it is less than 3, you can select a higher number of availability zones for that tier. -If it is not possible to increase the size per zone or the number of availability zones, you can reduce the number of replicas of your index data. We’ll achieve this by inspecting the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting index setting and decreasing the configured value. +If it is not possible to increase the size per zone or the number of availability zones, you can reduce the number of replicas of your index data. 
You achieve this by inspecting the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting index setting and decreasing the configured value. -1. Access {{kib}} as described above. -2. Inspect the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting. +1. Inspect the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting. ```console GET /my-index-000001/_settings/index.number_of_replicas ``` - The response will look like this: + The response looks like this: ```console-result { @@ -105,13 +88,13 @@ If it is not possible to increase the size per zone or the number of availabilit 1. Represents the currently configured value for the number of replica shards required for the index -3. Use the `_cat/nodes` API to find the number of nodes in the target tier: +1. Use the `_cat/nodes` API to find the number of nodes in the target tier: ```console GET /_cat/nodes?h=node.role ``` - The response will look like this, containing one row per node: + The response looks like this, containing one row per node: ```console-result himrst @@ -121,7 +104,7 @@ If it is not possible to increase the size per zone or the number of availabilit You can count the rows containing the letter representing the target tier to know how many nodes you have. See [Query parameters](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier. -4. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1. +1. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1. ```console PUT /my-index-000001/_settings @@ -135,8 +118,8 @@ If it is not possible to increase the size per zone or the number of availabilit 1. The new value for the `index.number_of_replicas` index configuration is decreased from the previous value of `2` to `1`. It can be set as low as 0 but configuring it to 0 for indices other than [searchable snapshot indices](../../deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md) may lead to temporary availability loss during node restarts or permanent data loss in case of data corruption. 
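After lowering the replica count, a quick way to confirm that no shard copies remain unassigned is to check the cluster health (the `filter_path` parameter is optional and only trims the response):

```console
GET _cluster/health?filter_path=status,unassigned_shards
```

`unassigned_shards` should drop to `0` once every remaining replica has been allocated.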
:::::: -::::::{tab-item} Self-managed -In order to get the replica shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. +::::::{applies-item} { self: } +To get the replica shards assigned, you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: @@ -144,7 +127,7 @@ To inspect which tier an index is targeting for assignment, use the [get index s GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings ``` -The response will look like this: +The response looks like this: ```console-result { @@ -167,7 +150,7 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec GET /my-index-000001/_settings/index.number_of_replicas ``` - The response will look like this: + The response looks like this: ```console-result { @@ -189,7 +172,7 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec GET /_cat/nodes?h=node.role ``` - The response will look like this, containing one row per node: + The response looks like this, containing one row per node: ```console-result himrst From f5f594b503a3139bd7f1953415fe4a24e229d99d Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Fri, 2 Jan 2026 11:29:32 +0000 Subject: [PATCH 05/10] Adding review suggestions for Increase the disk capacity of data nodes --- .../increase-capacity-data-node.md | 122 +++++++++++++----- 1 file changed, 89 insertions(+), 33 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-capacity-data-node.md b/troubleshoot/elasticsearch/increase-capacity-data-node.md index c148fc59e8..40bc079845 100644 --- a/troubleshoot/elasticsearch/increase-capacity-data-node.md +++ b/troubleshoot/elasticsearch/increase-capacity-data-node.md @@ -12,38 +12,20 @@ products: :::::::{applies-switch} -::::::{applies-item} { ess: } -To increase the disk capacity of the data nodes in your cluster: - -1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Hosted deployments** panel, select the gear under the **Manage deployment** column that corresponds to the name of your deployment. -3. If autoscaling is available but not enabled, enable it by clicking the **Enable autoscaling** button in a banner like the one below: - - :::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_banner.png - :alt: Autoscaling banner - :screenshot: - ::: - - Or you can go to **Actions > Edit deployment**, check the checkbox **Autoscale** and select **Save** from the bottom of the page. - - :::{image} /troubleshoot/images/elasticsearch-reference-enable_autoscaling.png - :alt: Enabling autoscaling - :screenshot: - ::: - -4. If autoscaling has succeeded the cluster should return to `healthy` status. If the cluster is still out of disk, check if autoscaling has reached its limits. 
You will be notified about this by the following banner: +::::::{applies-item} { ess:, ece: } - :::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_limits_banner.png - :alt: Autoscaling banner - :screenshot: - ::: - - Alternatively, you can go to **Actions > Edit deployment** and look for the label `LIMIT REACHED` as shown below: - - ![Autoscaling limits](/troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png "Autoscaling limits reached") +:::{warning} +:applies_to: ece: +In ECE, resizing is limited by your [allocator capacity](/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md). +::: +To increase the disk capacity of the data nodes in your cluster: - If you are seeing the banner, click **Update autoscaling settings** to go to the **Edit** page. Otherwise, if you are already on the **Edit** page, click **Edit settings** to increase the autoscaling limits. After you perform the change select **Save** from the bottom of the page. +1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body) or ECE Cloud UI. +1. On the home page, find your deployment and select **Manage**. +1. Go to **Actions** > **Edit deployment** and check that autoscaling is enabled. Adjust the **Enable Autoscaling for** dropdown menu as needed and select **Save**. +1. If autoscaling is successful, the cluster returns to a `healthy` status. +If the cluster is still out of disk, check if autoscaling has reached its set limits and [update your autoscaling settings](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-update). :::::: ::::::{applies-item} { self: } @@ -78,7 +60,7 @@ To increase the data node capacity in your cluster, you need to calculate the am The above means that in order to resolve the disk shortage we need to either drop our disk usage below the 90% or have more than 150GB available, read more on how this threshold works [here](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high). -2. The next step is to find out the current disk usage, this will indicate how much extra space is needed. For simplicity, our example has one node, but you can apply the same for every node over the relevant threshold. +1. The next step is to find out the current disk usage, this will indicate how much extra space is needed. For simplicity, our example has one node, but you can apply the same for every node over the relevant threshold. ```console GET _cat/allocation?v&s=disk.avail&h=node,disk.percent,disk.avail,disk.total,disk.used,disk.indices,shards @@ -91,12 +73,12 @@ To increase the data node capacity in your cluster, you need to calculate the am instance-0000000000 91 4.6gb 35gb 31.1gb 29.9gb 111 ``` -3. The high watermark configuration indicates that the disk usage needs to drop below 90%. To achieve this, 2 things are possible: +1. The high watermark configuration indicates that the disk usage needs to drop below 90%. To achieve this, 2 things are possible: * to add an extra data node to the cluster (this requires that you have more than one shard in your cluster), or * to extend the disk space of the current node by approximately 20% to allow this node to drop to 70%. This will give enough space to this node to not run out of space soon. -4. In the case of adding another data node, the cluster will not recover immediately. It might take some time to relocate some shards to the new node. You can check the progress here: +1. 
In the case of adding another data node, the cluster will not recover immediately. It might take some time to relocate some shards to the new node. You can check the progress here: ```console GET /_cat/shards?v&h=state,node&s=state @@ -105,4 +87,78 @@ To increase the data node capacity in your cluster, you need to calculate the am If in the response the shards' state is `RELOCATING`, it means that shards are still moving. Wait until all shards turn to `STARTED` or until the health disk indicator turns to `green`. :::::: -::::::: +::::::{applies-item} { eck: } +To increase the disk capacity of data nodes in your {{eck}} cluster, you can either add more data nodes or increase the storage size of existing nodes. + +**Option 1: Add more data nodes** + +1. Update the `count` field in your data node NodeSet to add more nodes: + + ```yaml subs=true + apiVersion: elasticsearch.k8s.elastic.co/v1 + kind: Elasticsearch + metadata: + name: quickstart + spec: + version: {{version.stack}} + nodeSets: + - name: data-nodes + count: 5 # Increase from previous count + config: + node.roles: ["data"] + volumeClaimTemplates: + - metadata: + name: elasticsearch-data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi + ``` + +1. Apply the changes: + + ```sh + kubectl apply -f your-elasticsearch-manifest.yaml + ``` + + ECK automatically creates the new nodes and {{es}} will relocate shards to balance the load. You can monitor the progress using: + + ```console + GET /_cat/shards?v&h=state,node&s=state + ``` + +**Option 2: Increase storage size of existing nodes** + +1. If your storage class supports [volume expansion](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), you can increase the storage size in the `volumeClaimTemplates`: + + ```yaml subs=true + apiVersion: elasticsearch.k8s.elastic.co/v1 + kind: Elasticsearch + metadata: + name: quickstart + spec: + version: {{version.stack}} + nodeSets: + - name: data-nodes + count: 3 + config: + node.roles: ["data"] + volumeClaimTemplates: + - metadata: + name: elasticsearch-data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 200Gi # Increased from previous size + ``` + +1. Apply the changes. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem will be resized online without restarting {{es}}. Otherwise, you may need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem. + +For more information, refer to [](/deploy-manage/deploy/cloud-on-k8s/update-deployments.md) and [](/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md). + +:::::: +::::::: \ No newline at end of file From 3f75fef9d7e1f3c3d7f5b745d424b1d82388b64e Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Mon, 5 Jan 2026 15:11:20 +0000 Subject: [PATCH 06/10] Implement review feedback for the total number of shards per node has been reached page --- .../increase-cluster-shard-limit.md | 131 ++++++++++-------- 1 file changed, 70 insertions(+), 61 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md index d7fc726dc3..ea7ced7a1a 100644 --- a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md @@ -14,60 +14,77 @@ products: {{es}} takes advantage of all available resources by distributing data (index shards) among the cluster nodes. 
-You might want to influence this data distribution by configuring the [`cluster.routing.allocation.total_shards_per_node`](elasticsearch://reference/elasticsearch/index-settings/total-shards-per-node.md#cluster-total-shards-per-node) system setting to restrict the number of shards that can be hosted on a single node in the system, regardless of the index. Various configurations limiting how many shards can be hosted on a single node can lead to shards being unassigned, because the cluster does not have enough nodes to satisfy the configuration. +You can influence the data distribution by configuring the [`cluster.routing.allocation.total_shards_per_node`](elasticsearch://reference/elasticsearch/index-settings/total-shards-per-node.md#cluster-total-shards-per-node) dynamic cluster setting to restrict the number of shards that can be hosted on a single node in the cluster. -To fix this issue, complete the following steps: +In earlier {{es}} versions, `cluster.routing.allocation.total_shards_per_node` is set to `1000`. Reaching that limit causes the following error: `Total number of shards per node has been reached` and requires an adjustment of this cluster setting. -:::::::{applies-switch} +Various configurations limiting how many shards can be hosted on a single node can lead to shards being unassigned, because the cluster does not have enough nodes to satisfy the configuration. +To ensure that each node carries a reasonable shard load, you might need to resize your deployment. -::::::{applies-item} { ess: } -To get the shards assigned, you need to increase the number of shards that can be collocated on a node in the cluster. You achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value. +Follow these steps to resolve this issue: -You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. +1. [Check and adjust the cluster shard limit](#adjust-cluster-shard-limit) to determine the current value and increase it if needed. +1. [Determine which data tier needs more capacity](#determine-data-tier) to identify the tier where shards need to be allocated. +1. [Resize your deployment](#resize-deployment) to add capacity and accommodate additional shards. -1. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): - ```console - GET /_cluster/settings?flat_settings - ``` +## Check and adjust the cluster shard limit [adjust-cluster-shard-limit] - The response will look like this: +The `cluster.routing.allocation.total_shards_per_node` setting controls the maximum number of shards that can be allocated to each node in a cluster. When this limit is reached, {{es}} cannot assign new shards to that node, leading to unassigned shards in your cluster. - ```console-result - { - "persistent": { - "cluster.routing.allocation.total_shards_per_node": "300" <1> - }, - "transient": {} - } - ``` +By checking the current value and increasing it, you allow more shards to be collocated on each node, which might resolve the allocation issue without adding more capacity to your cluster. - 1. 
Represents the current configured value for the total number of shards that can reside on one node in the system. +You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [{{es}} API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. -1. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: +### Check the current setting [check-the-shard-limiting-setting] - ```console - PUT _cluster/settings - { - "persistent" : { - "cluster.routing.allocation.total_shards_per_node" : 400 <1> - } - } - ``` +Use the [get cluster-wide settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) API to inspect the current value of `cluster.routing.allocation.total_shards_per_node`: - 1. The new value for the system-wide `total_shards_per_node` configuration is increased from the previous value of `300` to `400`. The `total_shards_per_node` configuration can also be set to `null`, which represents no upper bound with regards to how many shards can be collocated on one node in the system. -:::::: +```console +GET /_cluster/settings?flat_settings +``` -::::::{applies-item} { self: } -To get the shards assigned, you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. +The response looks like this: + +```console-result +{ + "persistent": { + "cluster.routing.allocation.total_shards_per_node": "300" <1> + }, + "transient": {} +} +``` + +1. Represents the current configured value for the total number of shards that can reside on one node in the cluster. If the value is null or absent, no explicit limit is configured. + +### Increase the setting + +Use the [update the cluster settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) API to increase the value to a higher number that accommodates your workload: + +```console +PUT _cluster/settings +{ + "persistent" : { + "cluster.routing.allocation.total_shards_per_node" : 400 <1> + } +} +``` + +1. The new value for the system-wide `total_shards_per_node` configuration is increased from the previous value of `300` to `400`. The `total_shards_per_node` configuration can also be set to `null`, which represents no upper bound with regards to how many shards can be collocated on one node in the system. + + + +## Determine which data tier needs more capacity [determine-data-tier] + +If increasing the cluster shard limit alone doesn't resolve the issue, or if you want to distribute shards more evenly, you need to identify which [data tier](/manage-data/lifecycle/data-tiers.md) requires additional capacity. 
-To inspect which tier is an index targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +Use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings ``` -The response will look like this: +The response looks like this: ```console-result { @@ -79,42 +96,34 @@ The response will look like this: } ``` -1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster. +1. Represents a comma-separated list of data tier node roles this index is allowed to be allocated on. The first tier in the list has the highest priority and is the tier the index is targeting. In this example, the tier preference is `data_warm,data_hot`, so the index is targeting the `warm` tier. If the warm tier lacks capacity, the index will fall back to the `data_hot` tier. -Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value: -1. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) for the index with unassigned shards: - ```console - GET /_cluster/settings?flat_settings - ``` +## Resize your deployment [resize-deployment] - The response looks like this: +After you've identified the tier that needs more capacity, you can resize your deployment to distribute the shard load and allow previously unassigned shards to be allocated. - ```console-result - { - "persistent": { - "cluster.routing.allocation.total_shards_per_node": "300" <1> - }, - "transient": {} - } - ``` +:::::::{applies-switch} - 1. Represents the current configured value for the total number of shards that can reside on one node in the system. +::::::{applies-item} { ess:, ece: } +To enable a new tier in your {{ech}} deployment, you edit the deployment topology to add a new data tier. -2. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: +1. In {{kib}}, open your deployment’s navigation menu (placed under the Elastic logo in the upper left corner) and go to **Manage this deployment**. +1. From the right hand side, click to expand the **Manage** dropdown button and select **Edit deployment** from the list of options. +1. On the **Edit** page, click on **+ Add Capacity** for the tier you identified you need to enable in your deployment. Choose the desired size and availability zones for the new tier. +1. Navigate to the bottom of the page and click the **Save** button. 
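If some shards are still unassigned after the extra capacity is available, the allocation explain API can show what is blocking them. An illustrative request for one shard of the example index (adjust `index`, `shard`, and `primary` to the shard you are investigating):

```console
GET _cluster/allocation/explain
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": true
}
```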
- ```console - PUT _cluster/settings - { - "persistent" : { - "cluster.routing.allocation.total_shards_per_node" : 400 <1> - } - } - ``` +:::::: + +::::::{applies-item} { self: } +Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/manage-data/lifecycle/data-tiers.md#configure-data-tiers-on-premise) to the new nodes, by adjusting the configuration in `elasticsearch.yml`. + +:::::: - 1. The new value for the system-wide `total_shards_per_node` configuration is increased from the previous value of `300` to `400`. The `total_shards_per_node` configuration can also be set to `null`, which represents no upper bound with regards to how many shards can be collocated on one node in the system. +::::::{applies-item} { eck: } +Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#change-node-role) to the new nodes, by adjusting the [node configuration](/deploy-manage/deploy/cloud-on-k8s/node-configuration.md) in the `spec` section of your {{es}} resource manifest. :::::: ::::::: From 108597c06ca8bfb9f246672ecfdf0c7608251b0b Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Mon, 5 Jan 2026 16:40:34 +0000 Subject: [PATCH 07/10] Applying review suggestions for the Total number of shards for an index on a single node exceeded page --- .../elasticsearch/increase-shard-limit.md | 125 +++++++++--------- 1 file changed, 65 insertions(+), 60 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-shard-limit.md b/troubleshoot/elasticsearch/increase-shard-limit.md index 0c15a40879..ac6a24da8a 100644 --- a/troubleshoot/elasticsearch/increase-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-shard-limit.md @@ -14,61 +14,75 @@ products: {{es}} takes advantage of all available resources by distributing data (index shards) among the cluster nodes. -You might want to influence this data distribution by configuring the [index.routing.allocation.total_shards_per_node](elasticsearch://reference/elasticsearch/index-settings/total-shards-per-node.md#total-shards-per-node) index setting to a custom value (for example, `1` in case of a highly trafficked index). Various configurations limiting how many shards an index can have located on one node can lead to shards being unassigned, because the cluster does not have enough nodes to satisfy the index configuration. +You can influence this data distribution by configuring the [index.routing.allocation.total_shards_per_node](elasticsearch://reference/elasticsearch/index-settings/total-shards-per-node.md#total-shards-per-node) dynamic index setting to restrict the maximum number of shards from a single index that can be allocated to a node. +For example, in case of a highly trafficked index, the value can be set to `1`. +Various configurations limiting how many shards an index can have located on one node can lead to shards being unassigned, because the cluster does not have enough nodes to satisfy the index configuration. To fix this issue, complete the following steps: -:::::::{applies-switch} +1. [Check and adjust the index allocation settings](#adjust-index-allocation-settings) to determine the current value and increase it if needed. +1. [Determine which data tier needs more capacity](#determine-data-tier) to identify the tier where shards need to be allocated. +1. [Resize your deployment](#resize-deployment) to add capacity and accommodate additional shards. 
-::::::{applies-item} { ess: } -To get the shards assigned, you need to increase the number of shards that can be collocated on a node. You achieve this by inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value for the indices that have shards unassigned. -You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. -1. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: +## Check and adjust the index allocation settings [adjust-index-allocation-settings] - ```console - GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings - ``` +The `index.routing.allocation.total_shards_per_node` setting controls the maximum number of shards that can be collocated on a node in your cluster. When this limit is reached, {{es}} cannot assign new shards to that node, leading to unassigned shards in your cluster. - The response will look like this: +By checking the current value and increasing it, you allow more shards to be collocated on each node, which might resolve the allocation issue without adding more capacity to your cluster. - ```console-result - { - "my-index-000001": { - "settings": { - "index.routing.allocation.total_shards_per_node": "1" <1> - } - } - } - ``` +You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [{{es}} API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. + +### Check the current index setting [check-the-index-setting] - 1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index. +Use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to inspect the `index.routing.allocation.total_shards_per_node` value for the index with unassigned shards: -1. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: +```console +GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings +``` - ```console - PUT /my-index-000001/_settings - { - "index" : { - "routing.allocation.total_shards_per_node" : "2" <1> - } +The response looks like this: + +```console-result +{ + "my-index-000001": { + "settings": { + "index.routing.allocation.total_shards_per_node": "1" <1> } - ``` + } +} +``` - 1. The new value for the `total_shards_per_node` configuration for the `my-index-000001` index is increased from the previous value of `1` to `2`. The `total_shards_per_node` configuration can also be set to `-1`, which represents no upper bound with regards to how many shards of the same index can reside on one node. -:::::: +1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index. 
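If you are not sure which indices carry this restriction, the same request can be broadened to an index pattern or to all indices; any index that returns a value has an explicit per-node limit configured:

```console
GET /_all/_settings/index.routing.allocation.total_shards_per_node?flat_settings
```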
-::::::{applies-item} { self: } -To get the shards assigned, you can add more nodes to your {{es}} cluster and assing the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. +### Increase the setting -To inspect which tier is an index targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +Use the [update index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) API to increase the value for the total number of shards that can be assigned on a node to a higher value that accommodates your workload: + +```console +PUT /my-index-000001/_settings +{ + "index" : { + "routing.allocation.total_shards_per_node" : "2" <1> + } +} +``` + +1. The new value for the `total_shards_per_node` configuration for the `my-index-000001` index is increased from the previous value of `1` to `2`. The `total_shards_per_node` configuration can also be set to `-1`, which represents no upper bound with regards to how many shards of the same index can reside on one node. + + +## Determine which data tier needs more capacity [determine-data-tier] + +If increasing the index shard limit alone doesn't resolve the issue, or if you want to distribute shards more evenly, you need to identify which [data tier](/manage-data/lifecycle/data-tiers.md) requires additional capacity. + +Use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings ``` -The response will look like this: +The response looks like this: ```console-result { @@ -80,43 +94,34 @@ The response will look like this: } ``` -1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster. +1. Represents a comma-separated list of data tier node roles this index is allowed to be allocated on. The first tier in the list has the highest priority and is the tier the index is targeting. In this example, the tier preference is `data_warm,data_hot`, so the index is targeting the `warm` tier. If the warm tier lacks capacity, the index will fall back to the `data_hot` tier. +## Resize your deployment [resize-deployment] -Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value will allow more shards to be assigned on the same node. +After you've identified the tier that needs more capacity, you can resize your deployment to distribute the shard load and allow previously unassigned shards to be allocated. -1. 
Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: +:::::::{applies-switch} - ```console - GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings - ``` +::::::{applies-item} { ess:, ece: } +To enable a new tier in your {{ech}} deployment, you edit the deployment topology to add a new data tier. - The response will look like this: +1. In {{kib}}, open your deployment’s navigation menu (placed under the Elastic logo in the upper left corner) and go to **Manage this deployment**. +1. From the right hand side, click to expand the **Manage** dropdown button and select **Edit deployment** from the list of options. +1. On the **Edit** page, click on **+ Add Capacity** for the tier you identified you need to enable in your deployment. Choose the desired size and availability zones for the new tier. +1. Navigate to the bottom of the page and click the **Save** button. - ```console-result - { - "my-index-000001": { - "settings": { - "index.routing.allocation.total_shards_per_node": "1" <1> - } - } - } - ``` +:::::: - 1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index. +::::::{applies-item} { self: } +Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/manage-data/lifecycle/data-tiers.md#configure-data-tiers-on-premise) to the new nodes, by adjusting the configuration in `elasticsearch.yml`. -2. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the total number of shards that can be assigned on one node or reset the value to unbounded (`-1`): +:::::: - ```console - PUT /my-index-000001/_settings - { - "index" : { - "routing.allocation.total_shards_per_node" : -1 - } - } - ``` +::::::{applies-item} { eck: } +Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#change-node-role) to the new nodes, by adjusting the [node configuration](/deploy-manage/deploy/cloud-on-k8s/node-configuration.md) in the `spec` section of your {{es}} resource manifest. :::::: + ::::::: :::{include} /deploy-manage/_snippets/autoops-callout-with-ech.md ::: \ No newline at end of file From 5768cb677c718dcf73c016aa3b7224e997315a48 Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Mon, 5 Jan 2026 17:52:50 +0000 Subject: [PATCH 08/10] Applying review feedback to Warning: Not enough nodes to allocate all shard replicas --- .../elasticsearch/increase-tier-capacity.md | 127 +++++------------- 1 file changed, 34 insertions(+), 93 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-tier-capacity.md b/troubleshoot/elasticsearch/increase-tier-capacity.md index 92ed2e2634..b9aa0b51e3 100644 --- a/troubleshoot/elasticsearch/increase-tier-capacity.md +++ b/troubleshoot/elasticsearch/increase-tier-capacity.md @@ -16,16 +16,15 @@ If a warning is encountered with not enough nodes to allocate all shard replicas To accomplish this, complete the following steps: -:::::::{applies-switch} - -::::::{applies-item} { ess: } -You can get the replica shards assigned by adding an availability zone. This action increases the number of data nodes in the {{es}} cluster so that the replica shards can be assigned. You achieve this by editing your deployment. +1. 
[Determine which data tier needs more capacity](#determine-data-tier) to identify the tier where shards need to be allocated. +1. [Resize your deployment](#resize-deployment) to add capacity and accommodate all shard replicas. +1. [Check and adjust the index replicas limit](#adjust-index-replica-limit) to determine the current value and reduce it if needed. -This procedure includes steps you complete in {{kib}}, as well as steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. +## Determine which data tier needs more capacity [determine-data-tier] -First, you need to discover which tier an index is targeting for assignment. +You can run the following step using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. -To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +To determine which tiers an index's shards can be allocated to, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings @@ -43,108 +42,48 @@ The response looks like this: } ``` -1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster. - - -Now that you know the tier, you want to increase the number of nodes in that tier so that the replicas can be allocated. To do this you can either increase the size per zone to increase the number of nodes in the availability zone(s) you were already using, or increase the number of availability zones. - -In {{kib}}, go to the deployment’s landing page by clicking on the three horizontal bars on the top left of the screen and choosing **Manage this deployment**. On that page click the **Manage** button, and choose **Edit deployment**. Note that you must be logged in to [https://cloud.elastic.co/](https://cloud.elastic.co/) in order to do this. In the {{es}} section, find the tier where the replica shards could not be assigned. - -:::{image} /troubleshoot/images/elasticsearch-reference-ess-advanced-config-data-tiers.png -:alt: {{kib}} Console -:screenshot: -::: - -* Option 1: Increase the size per zone - - * Look at the values in the **Size per zone** drop down. One node is created in each zone for every 64 GB of RAM you select here. If you currently have 64 GB RAM or less selected, you have one node in each zone. If you select 128 GB RAM, you will get 2 nodes per zone. If you select 192 GB RAM, you will get 3 nodes per zone, and so on. If the value is less than the maximum possible, you can choose a higher value for that tier to add more nodes. - -* Option 2: Increase the number of availability zones +1. 
Represents a comma-separated list of data tier node roles this index is allowed to be allocated on. The first tier in the list has the highest priority and is the tier the index is targeting. In this example, the tier preference is `data_warm,data_hot`, so the index is targeting the `warm` tier. If the warm tier lacks capacity, the index will fall back to the `data_hot` tier. - * Find the **Availability zones** selection. If it is less than 3, you can select a higher number of availability zones for that tier. -If it is not possible to increase the size per zone or the number of availability zones, you can reduce the number of replicas of your index data. You achieve this by inspecting the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting index setting and decreasing the configured value. +## Resize your deployment [resize-deployment] -1. Inspect the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting. +After you've identified the tier that needs more capacity, you can resize your deployment to distribute the shard load and allow previously unassigned shards to be allocated. - ```console - GET /my-index-000001/_settings/index.number_of_replicas - ``` - - The response looks like this: - - ```console-result - { - "my-index-000001" : { - "settings" : { - "index" : { - "number_of_replicas" : "2" <1> - } - } - } - } - ``` - - 1. Represents the currently configured value for the number of replica shards required for the index - -1. Use the `_cat/nodes` API to find the number of nodes in the target tier: - - ```console - GET /_cat/nodes?h=node.role - ``` - - The response looks like this, containing one row per node: - - ```console-result - himrst - mv - himrst - ``` +:::::::{applies-switch} - You can count the rows containing the letter representing the target tier to know how many nodes you have. See [Query parameters](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier. +::::::{applies-item} { ess:, ece: } -1. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1. +You can either increase the size per zone to increase the number of nodes in the availability zone(s) you were already using, or increase the number of availability zones. - ```console - PUT /my-index-000001/_settings - { - "index" : { - "number_of_replicas" : 1 <1> - } - } - ``` +1. In {{kib}}, open your deployment’s navigation menu (placed under the Elastic logo in the upper left corner) and go to **Manage this deployment**. +1. From the right hand side, click to expand the **Manage** dropdown button and select **Edit deployment** from the list of options. +1. On the **Edit** page, click on **+ Add Capacity** for the tier you identified you need to enable in your deployment. 
Choose the desired size and availability zones for the new tier.
+1. Navigate to the bottom of the page and click the **Save** button.
 
-    1. The new value for the `index.number_of_replicas` index configuration is decreased from the previous value of `2` to `1`. It can be set as low as 0 but configuring it to 0 for indices other than [searchable snapshot indices](../../deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md) may lead to temporary availability loss during node restarts or permanent data loss in case of data corruption.
::::::

::::::{applies-item} { self: }
-To get the replica shards assigned, you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes.
+Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/manage-data/lifecycle/data-tiers.md#configure-data-tiers-on-premise) to the new nodes, by adjusting the configuration in `elasticsearch.yml`.
 
-To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting:
+::::::
 
-```console
-GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings
-```
+::::::{applies-item} { eck: }
+Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#change-node-role) to the new nodes, by adjusting the [node configuration](/deploy-manage/deploy/cloud-on-k8s/node-configuration.md) in the `spec` section of your {{es}} resource manifest.
+::::::
 
-The response looks like this:
+:::::::
+:::{include} /deploy-manage/_snippets/autoops-callout-with-ech.md
+:::
 
-```console-result
-{
-  "my-index-000001": {
-    "settings": {
-      "index.routing.allocation.include._tier_preference": "data_warm,data_hot" <1>
-    }
-  }
-}
-```
 
-1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster.
 
-Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspect the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting and decrease the configured value:
+## Check and adjust the index replicas limit [adjust-index-replica-limit]
 
-1. Inspect the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting for the index with unassigned replica shards:
+If it is not possible to increase capacity by resizing your deployment, you can reduce the number of replicas of your index data. You achieve this by inspecting the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting and decreasing the configured value.
 
+
+1. 
Use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.number_of_replicas` index setting.

    ```console
    GET /my-index-000001/_settings/index.number_of_replicas
    ```

    The response looks like this:

    ```console-result
    {
      "my-index-000001" : {
        "settings" : {
          "index" : {
            "number_of_replicas" : "2" <1>
          }
        }
      }
    }
    ```

    1. Represents the currently configured value for the number of replica shards required for the index

-2. Use the `_cat/nodes` API to find the number of nodes in the target tier:
+1. Use the [`_cat/nodes`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) API to find the number of nodes in the target tier:

    ```console
    GET /_cat/nodes?h=node.role
    ```

    The response looks like this, containing one row per node:

    ```console-result
    himrst
    mv
    himrst
    ```

    You can count the rows containing the letter representing the target tier to know how many nodes you have. See [Query parameters](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier.

-3. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1.
+1. Use the [update index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) API to decrease the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1.

    ```console
    PUT /my-index-000001/_settings
    {
      "index" : {
        "number_of_replicas" : 1 <1>
      }
    }
    ```

    1. The new value for the `index.number_of_replicas` index configuration is decreased from the previous value of `2` to `1`. It can be set as low as 0 but configuring it to 0 for indices other than [searchable snapshot indices](../../deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md) may lead to temporary availability loss during node restarts or permanent data loss in case of data corruption.
-::::::

-:::::::
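+
+After lowering the replica count, you can optionally confirm that the previously unassigned replica shards have been allocated. One simple, illustrative check is the cluster health API:
+
+```console
+GET _cluster/health?filter_path=status,unassigned_shards
+```
+
+When `unassigned_shards` drops to `0` and `status` returns to `green`, all replica shards have been assigned.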
+ + :::{include} /deploy-manage/_snippets/autoops-callout-with-ech.md ::: From bf835c6ac02018893bcfb723ce619cc047c8796e Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Tue, 6 Jan 2026 16:45:10 +0000 Subject: [PATCH 09/10] Add review suggestions for the Increase the disk capacity of data nodes doc --- .../increase-capacity-data-node.md | 78 ++++++++++++------- 1 file changed, 50 insertions(+), 28 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-capacity-data-node.md b/troubleshoot/elasticsearch/increase-capacity-data-node.md index 40bc079845..3bf78ebaba 100644 --- a/troubleshoot/elasticsearch/increase-capacity-data-node.md +++ b/troubleshoot/elasticsearch/increase-capacity-data-node.md @@ -10,34 +10,25 @@ products: # Increase the disk capacity of data nodes [increase-capacity-data-node] -:::::::{applies-switch} +Disk capacity pressures may cause index failures, unassigned shards, and cluster instability. -::::::{applies-item} { ess:, ece: } +{{es}} uses [disk-based shard allocation watermarks](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation) to manage disk space on nodes, which can block allocation or indexing when nodes run low on disk space. -:::{warning} -:applies_to: ece: -In ECE, resizing is limited by your [allocator capacity](/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md). -::: +To increase the disk capacity of the data nodes in your cluster, complete these steps: -To increase the disk capacity of the data nodes in your cluster: +1. [Estimate how much disk capacity you need](#estimate-required-capacity). +1. [Increase the disk capacity](#increase-disk-capacity-of-data-nodes). -1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body) or ECE Cloud UI. -1. On the home page, find your deployment and select **Manage**. -1. Go to **Actions** > **Edit deployment** and check that autoscaling is enabled. Adjust the **Enable Autoscaling for** dropdown menu as needed and select **Save**. -1. If autoscaling is successful, the cluster returns to a `healthy` status. -If the cluster is still out of disk, check if autoscaling has reached its set limits and [update your autoscaling settings](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-update). -:::::: -::::::{applies-item} { self: } -To increase the data node capacity in your cluster, you need to calculate the amount of extra disk space needed. +## Estimate the amount of required disk capacity [estimate-required-capacity] -1. First, retrieve the relevant disk thresholds that will indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark: +1. Retrieve the relevant disk thresholds that indicate how much space should be available. 
The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark:

    ```console
    GET _cluster/settings?include_defaults&filter_path=*.cluster.routing.allocation.disk.watermark.high*
    ```

-    The response will look like this:
+    The response looks like this:

    ```console-result
    {
      "defaults": {
        "cluster": {
          "routing": {
            "allocation": {
              "disk": {
                "watermark": {
                  "high": "90%",
                  "high.max_headroom": "150GB"
                }
              }
            }
          }
        }
      }
    }
    ```

-    The above means that in order to resolve the disk shortage we need to either drop our disk usage below the 90% or have more than 150GB available, read more on how this threshold works [here](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high).
+    The above means that in order to resolve the disk shortage, disk usage must drop below 90% or the node must have more than 150GB of disk space available. Read more about how this threshold works in the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) documentation.

-1. The next step is to find out the current disk usage, this will indicate how much extra space is needed. For simplicity, our example has one node, but you can apply the same for every node over the relevant threshold.
+1. Find the current disk usage, which in turn indicates how much extra space is required. For simplicity, our example has one node, but you can apply the same steps to every node that is over the relevant threshold.

    ```console
    GET _cat/allocation?v&s=disk.avail&h=node,disk.percent,disk.avail,disk.total,disk.used,disk.indices,shards
    ```

-    The response will look like this:
+    The response looks like this:

    ```console-result
    node                   disk.percent disk.avail disk.total disk.used disk.indices shards
    instance-0000000000              91      4.6gb       35gb    31.1gb       29.9gb    111
    ```

-1. The high watermark configuration indicates that the disk usage needs to drop below 90%. To achieve this, 2 things are possible:
+In this scenario, the high watermark configuration indicates that the disk usage needs to drop below 90%, while the current disk usage is 91%.

-    * to add an extra data node to the cluster (this requires that you have more than one shard in your cluster), or
-    * to extend the disk space of the current node by approximately 20% to allow this node to drop to 70%. This will give enough space to this node to not run out of space soon.

-1. In the case of adding another data node, the cluster will not recover immediately. It might take some time to relocate some shards to the new node. You can check the progress here:
+
+## Increase the disk capacity of your data nodes [increase-disk-capacity-of-data-nodes]

-    ```console
-    GET /_cat/shards?v&h=state,node&s=state
-    ```
+Here are the most common ways to increase disk capacity:
+
+* You can expand the disk space of the current nodes (by replacing your nodes with ones that have higher disk capacity).
+* You can add extra data nodes to your cluster (to increase capacity for the data tier that might be short of disk).
+ +When you add another data node, the cluster doesn't recover immediately and it might take some time until shards are relocated to the new node. +You can check the progress here: + +```console +GET /_cat/shards?v&h=state,node&s=state +``` + +If in the response the shards' state is `RELOCATING`, it means that shards are still moving. Wait until all shards turn to `STARTED` or until the health disk indicator turns to `green`. + +:::::::{applies-switch} + +::::::{applies-item} { ess:, ece: } + +:::{warning} +:applies_to: ece: +In ECE, resizing is limited by your [allocator capacity](/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md). +::: + +To increase the disk capacity of the data nodes in your cluster: + +1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body) or ECE Cloud UI. +1. On the home page, find your deployment and select **Manage**. +1. Go to **Actions** > **Edit deployment** and check that autoscaling is enabled. Adjust the **Enable Autoscaling for** dropdown menu as needed and select **Save**. +1. If autoscaling is successful, the cluster returns to a `healthy` status. +If the cluster is still out of disk, check if autoscaling has reached its set limits and [update your autoscaling settings](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-update). + +You can also add more capacity by adding more nodes to your cluster and targeting the data tier that may be short of disk. For more information, refer to [](/troubleshoot/elasticsearch/add-tier.md). + +:::::: + +::::::{applies-item} { self: } +To increase the data node capacity in your cluster, you can [add more nodes](/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md) to the cluster. - If in the response the shards' state is `RELOCATING`, it means that shards are still moving. Wait until all shards turn to `STARTED` or until the health disk indicator turns to `green`. :::::: ::::::{applies-item} { eck: } From f657a0ac9775b8cd002b502df8e9f0f130b2c24a Mon Sep 17 00:00:00 2001 From: Vlada Chirmicci Date: Tue, 6 Jan 2026 17:19:11 +0000 Subject: [PATCH 10/10] Add ECK steps to increase capacity in the Total number of shards for an index on a single node exceeded page --- .../elasticsearch/increase-shard-limit.md | 74 ++++++++++++++++++- 1 file changed, 72 insertions(+), 2 deletions(-) diff --git a/troubleshoot/elasticsearch/increase-shard-limit.md b/troubleshoot/elasticsearch/increase-shard-limit.md index ac6a24da8a..bdfae49bf2 100644 --- a/troubleshoot/elasticsearch/increase-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-shard-limit.md @@ -113,12 +113,82 @@ To enable a new tier in your {{ech}} deployment, you edit the deployment topolog :::::: ::::::{applies-item} { self: } -Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/manage-data/lifecycle/data-tiers.md#configure-data-tiers-on-premise) to the new nodes, by adjusting the configuration in `elasticsearch.yml`. +[Add more nodes](/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md) to your {{es}} cluster and assign the index’s target tier [node role](/manage-data/lifecycle/data-tiers.md#configure-data-tiers-on-premise) to the new nodes, by adjusting the configuration in `elasticsearch.yml`. 
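+
+For example, if the index targets the warm tier (a `data_warm,data_hot` tier preference), the new node’s `elasticsearch.yml` might contain a role assignment like the following sketch. The exact role list is illustrative; use the tier that your index actually targets:
+
+```yaml
+# Give this node the warm data tier role so that shards with a
+# data_warm tier preference can be allocated to it.
+node.roles: [ data_warm ]
+```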
:::::: ::::::{applies-item} { eck: } -Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#change-node-role) to the new nodes, by adjusting the [node configuration](/deploy-manage/deploy/cloud-on-k8s/node-configuration.md) in the `spec` section of your {{es}} resource manifest. +To increase the disk capacity of data nodes in your {{eck}} cluster, you can either add more data nodes or increase the storage size of existing nodes. + +**Option 1: Add more data nodes** + +1. Update the `count` field in your data node NodeSet to add more nodes: + + ```yaml subs=true + apiVersion: elasticsearch.k8s.elastic.co/v1 + kind: Elasticsearch + metadata: + name: quickstart + spec: + version: {{version.stack}} + nodeSets: + - name: data-nodes + count: 5 # Increase from previous count + config: + node.roles: ["data"] + volumeClaimTemplates: + - metadata: + name: elasticsearch-data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi + ``` + +1. Apply the changes: + + ```sh + kubectl apply -f your-elasticsearch-manifest.yaml + ``` + + ECK automatically creates the new nodes and {{es}} will relocate shards to balance the load. You can monitor the progress using: + + ```console + GET /_cat/shards?v&h=state,node&s=state + ``` + +**Option 2: Increase storage size of existing nodes** + +1. If your storage class supports [volume expansion](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), you can increase the storage size in the `volumeClaimTemplates`: + + ```yaml subs=true + apiVersion: elasticsearch.k8s.elastic.co/v1 + kind: Elasticsearch + metadata: + name: quickstart + spec: + version: {{version.stack}} + nodeSets: + - name: data-nodes + count: 3 + config: + node.roles: ["data"] + volumeClaimTemplates: + - metadata: + name: elasticsearch-data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 200Gi # Increased from previous size + ``` + +1. Apply the changes. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem will be resized online without restarting {{es}}. Otherwise, you may need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem. + +For more information, refer to [](/deploy-manage/deploy/cloud-on-k8s/update-deployments.md) and [](/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md). ::::::