diff --git a/troubleshoot/elasticsearch/increase-capacity-data-node.md b/troubleshoot/elasticsearch/increase-capacity-data-node.md index 787f3b77f0..3bf78ebaba 100644 --- a/troubleshoot/elasticsearch/increase-capacity-data-node.md +++ b/troubleshoot/elasticsearch/increase-capacity-data-node.md @@ -4,65 +4,31 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-capacity-data-node.html applies_to: stack: - deployment: - eck: - ess: - ece: - self: products: - id: elasticsearch --- # Increase the disk capacity of data nodes [increase-capacity-data-node] -:::::::{tab-set} +Disk capacity pressures may cause index failures, unassigned shards, and cluster instability. -::::::{tab-item} {{ech}} -In order to increase the disk capacity of the data nodes in your cluster: +{{es}} uses [disk-based shard allocation watermarks](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation) to manage disk space on nodes, which can block allocation or indexing when nodes run low on disk space. -1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Hosted deployments** panel, click the gear under the `Manage deployment` column that corresponds to the name of your deployment. -3. If autoscaling is available but not enabled, enable it. You can do this by clicking the button `Enable autoscaling` on a banner like the one below: +To increase the disk capacity of the data nodes in your cluster, complete these steps: - :::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_banner.png - :alt: Autoscaling banner - :screenshot: - ::: +1. [Estimate how much disk capacity you need](#estimate-required-capacity). +1. [Increase the disk capacity](#increase-disk-capacity-of-data-nodes). - Or you can go to `Actions > Edit deployment`, check the checkbox `Autoscale` and click `save` at the bottom of the page. 
- :::{image} /troubleshoot/images/elasticsearch-reference-enable_autoscaling.png - :alt: Enabling autoscaling - :screenshot: - ::: +## Estimate the amount of required disk capacity [estimate-required-capacity] -4. If autoscaling has succeeded the cluster should return to `healthy` status. If the cluster is still out of disk, check if autoscaling has reached its limits. You will be notified about this by the following banner: - - :::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_limits_banner.png - :alt: Autoscaling banner - :screenshot: - ::: - - or you can go to `Actions > Edit deployment` and look for the label `LIMIT REACHED` as shown below: - - :::{image} /troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png - :alt: Autoscaling limits reached - :screenshot: - ::: - - If you are seeing the banner click `Update autoscaling settings` to go to the `Edit` page. Otherwise, you are already in the `Edit` page, click `Edit settings` to increase the autoscaling limits. After you perform the change click `save` at the bottom of the page. -:::::: - -::::::{tab-item} Self-managed -In order to increase the data node capacity in your cluster, you will need to calculate the amount of extra disk space needed. - -1. First, retrieve the relevant disk thresholds that will indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark: +1. 
Retrieve the relevant disk thresholds that indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark:

    ```console
    GET _cluster/settings?include_defaults&filter_path=*.cluster.routing.allocation.disk.watermark.high*
    ```

-    The response will look like this:
+    The response looks like this:

    ```console-result
    {
@@ -83,33 +49,138 @@ In order to increase the data node capacity in your cluster, you will need to ca
    }
    ```

-    The above means that in order to resolve the disk shortage we need to either drop our disk usage below the 90% or have more than 150GB available, read more on how this threshold works [here](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high).
+    The above means that to resolve the disk shortage, disk usage must drop below 90%, or more than 150GB must be available. Read more about how this threshold works in the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) documentation.

-2. The next step is to find out the current disk usage, this will indicate how much extra space is needed. For simplicity, our example has one node, but you can apply the same for every node over the relevant threshold.
+1. Find the current disk usage, which in turn indicates how much extra space is required.
For simplicity, our example has one node, but you can apply the same for every node over the relevant threshold. ```console GET _cat/allocation?v&s=disk.avail&h=node,disk.percent,disk.avail,disk.total,disk.used,disk.indices,shards ``` - The response will look like this: + The response looks like this: ```console-result node disk.percent disk.avail disk.total disk.used disk.indices shards instance-0000000000 91 4.6gb 35gb 31.1gb 29.9gb 111 ``` -3. The high watermark configuration indicates that the disk usage needs to drop below 90%. To achieve this, 2 things are possible: +In this scenario, the high watermark configuration indicates that the disk usage needs to drop below 90%, while the current disk usage is 91%. + + +## Increase the disk capacity of your data nodes [increase-disk-capacity-of-data-nodes] + +Here are the most common ways to increase disk capacity: + +* You can expand the disk space of the current nodes (by replacing your nodes with ones with higher capacity). +* You can add extra data nodes to your cluster (to increase capacity for the data tier that might be short of disk). + +When you add another data node, the cluster doesn't recover immediately and it might take some time until shards are relocated to the new node. +You can check the progress here: + +```console +GET /_cat/shards?v&h=state,node&s=state +``` + +If in the response the shards' state is `RELOCATING`, it means that shards are still moving. Wait until all shards turn to `STARTED` or until the health disk indicator turns to `green`. + +:::::::{applies-switch} + +::::::{applies-item} { ess:, ece: } + +:::{warning} +:applies_to: ece: +In ECE, resizing is limited by your [allocator capacity](/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md). +::: + +To increase the disk capacity of the data nodes in your cluster: + +1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body) or ECE Cloud UI. +1. 
On the home page, find your deployment and select **Manage**. +1. Go to **Actions** > **Edit deployment** and check that autoscaling is enabled. Adjust the **Enable Autoscaling for** dropdown menu as needed and select **Save**. +1. If autoscaling is successful, the cluster returns to a `healthy` status. +If the cluster is still out of disk, check if autoscaling has reached its set limits and [update your autoscaling settings](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-update). + +You can also add more capacity by adding more nodes to your cluster and targeting the data tier that may be short of disk. For more information, refer to [](/troubleshoot/elasticsearch/add-tier.md). + +:::::: + +::::::{applies-item} { self: } +To increase the data node capacity in your cluster, you can [add more nodes](/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md) to the cluster. + +:::::: + +::::::{applies-item} { eck: } +To increase the disk capacity of data nodes in your {{eck}} cluster, you can either add more data nodes or increase the storage size of existing nodes. + +**Option 1: Add more data nodes** + +1. Update the `count` field in your data node NodeSet to add more nodes: + + ```yaml subs=true + apiVersion: elasticsearch.k8s.elastic.co/v1 + kind: Elasticsearch + metadata: + name: quickstart + spec: + version: {{version.stack}} + nodeSets: + - name: data-nodes + count: 5 # Increase from previous count + config: + node.roles: ["data"] + volumeClaimTemplates: + - metadata: + name: elasticsearch-data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi + ``` + +1. Apply the changes: - * to add an extra data node to the cluster (this requires that you have more than one shard in your cluster), or - * to extend the disk space of the current node by approximately 20% to allow this node to drop to 70%. This will give enough space to this node to not run out of space soon. 
+ ```sh + kubectl apply -f your-elasticsearch-manifest.yaml + ``` -4. In the case of adding another data node, the cluster will not recover immediately. It might take some time to relocate some shards to the new node. You can check the progress here: + ECK automatically creates the new nodes and {{es}} will relocate shards to balance the load. You can monitor the progress using: ```console GET /_cat/shards?v&h=state,node&s=state ``` - If in the response the shards' state is `RELOCATING`, it means that shards are still moving. Wait until all shards turn to `STARTED` or until the health disk indicator turns to `green`. -:::::: +**Option 2: Increase storage size of existing nodes** + +1. If your storage class supports [volume expansion](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), you can increase the storage size in the `volumeClaimTemplates`: + + ```yaml subs=true + apiVersion: elasticsearch.k8s.elastic.co/v1 + kind: Elasticsearch + metadata: + name: quickstart + spec: + version: {{version.stack}} + nodeSets: + - name: data-nodes + count: 3 + config: + node.roles: ["data"] + volumeClaimTemplates: + - metadata: + name: elasticsearch-data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 200Gi # Increased from previous size + ``` + +1. Apply the changes. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem will be resized online without restarting {{es}}. Otherwise, you may need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem. -::::::: +For more information, refer to [](/deploy-manage/deploy/cloud-on-k8s/update-deployments.md) and [](/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md). 
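+Volume expansion in Option 2 only works when the PersistentVolumes come from a storage class that allows it. The following sketch shows what such a storage class might look like; the name and provisioner are illustrative and depend on your platform:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: expandable-ssd # illustrative name
+provisioner: ebs.csi.aws.com # example CSI driver; use your platform's provisioner
+allowVolumeExpansion: true # required to resize PVCs in place
+```
+
+You can check whether your storage class allows expansion by running `kubectl get storageclass` and inspecting the `ALLOWVOLUMEEXPANSION` column.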
+ +:::::: +::::::: \ No newline at end of file diff --git a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md index 2bad9edb46..ea7ced7a1a 100644 --- a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md @@ -4,11 +4,6 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-cluster-shard-limit.html applies_to: stack: - deployment: - eck: - ess: - ece: - self: products: - id: elasticsearch --- @@ -19,74 +14,77 @@ products: {{es}} takes advantage of all available resources by distributing data (index shards) among the cluster nodes. -You might want to influence this data distribution by configuring the [`cluster.routing.allocation.total_shards_per_node`](elasticsearch://reference/elasticsearch/index-settings/total-shards-per-node.md#cluster-total-shards-per-node) system setting to restrict the number of shards that can be hosted on a single node in the system, regardless of the index. Various configurations limiting how many shards can be hosted on a single node can lead to shards being unassigned, because the cluster does not have enough nodes to satisfy the configuration. +You can influence the data distribution by configuring the [`cluster.routing.allocation.total_shards_per_node`](elasticsearch://reference/elasticsearch/index-settings/total-shards-per-node.md#cluster-total-shards-per-node) dynamic cluster setting to restrict the number of shards that can be hosted on a single node in the cluster. -To fix this issue, complete the following steps: +In earlier {{es}} versions, `cluster.routing.allocation.total_shards_per_node` is set to `1000`. Reaching that limit causes the following error: `Total number of shards per node has been reached` and requires an adjustment of this cluster setting. 
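+Before adjusting any settings, you can confirm that shard allocation is degraded. One quick check, assuming a cluster recent enough to expose the health report API, is the `shards_availability` indicator:
+
+```console
+GET _health_report/shards_availability
+```
+
+The response reports unassigned shards together with suggested diagnoses.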
-:::::::{tab-set} +Various configurations limiting how many shards can be hosted on a single node can lead to shards being unassigned, because the cluster does not have enough nodes to satisfy the configuration. +To ensure that each node carries a reasonable shard load, you might need to resize your deployment. -::::::{tab-item} {{ech}} -In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node in the cluster. We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value. +Follow these steps to resolve this issue: -**Use {{kib}}** +1. [Check and adjust the cluster shard limit](#adjust-cluster-shard-limit) to determine the current value and increase it if needed. +1. [Determine which data tier needs more capacity](#determine-data-tier) to identify the tier where shards need to be allocated. +1. [Resize your deployment](#resize-deployment) to add capacity and accommodate additional shards. -1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Hosted deployments** panel, click the name of your deployment. - ::::{note} - If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md). - :::: +## Check and adjust the cluster shard limit [adjust-cluster-shard-limit] -3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**. +The `cluster.routing.allocation.total_shards_per_node` setting controls the maximum number of shards that can be allocated to each node in a cluster. 
When this limit is reached, {{es}} cannot assign new shards to that node, leading to unassigned shards in your cluster. - :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png - :alt: {{kib}} Console - :screenshot: - ::: +By checking the current value and increasing it, you allow more shards to be collocated on each node, which might resolve the allocation issue without adding more capacity to your cluster. -4. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): +You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [{{es}} API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. - ```console - GET /_cluster/settings?flat_settings - ``` +### Check the current setting [check-the-shard-limiting-setting] - The response will look like this: +Use the [get cluster-wide settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) API to inspect the current value of `cluster.routing.allocation.total_shards_per_node`: - ```console-result - { - "persistent": { - "cluster.routing.allocation.total_shards_per_node": "300" <1> - }, - "transient": {} - } - ``` +```console +GET /_cluster/settings?flat_settings +``` - 1. Represents the current configured value for the total number of shards that can reside on one node in the system. +The response looks like this: -5. 
[Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: +```console-result +{ + "persistent": { + "cluster.routing.allocation.total_shards_per_node": "300" <1> + }, + "transient": {} +} +``` - ```console - PUT _cluster/settings - { - "persistent" : { - "cluster.routing.allocation.total_shards_per_node" : 400 <1> - } - } - ``` +1. Represents the current configured value for the total number of shards that can reside on one node in the cluster. If the value is null or absent, no explicit limit is configured. - 1. The new value for the system-wide `total_shards_per_node` configuration is increased from the previous value of `300` to `400`. The `total_shards_per_node` configuration can also be set to `null`, which represents no upper bound with regards to how many shards can be collocated on one node in the system. -:::::: +### Increase the setting + +Use the [update the cluster settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) API to increase the value to a higher number that accommodates your workload: + +```console +PUT _cluster/settings +{ + "persistent" : { + "cluster.routing.allocation.total_shards_per_node" : 400 <1> + } +} +``` + +1. The new value for the system-wide `total_shards_per_node` configuration is increased from the previous value of `300` to `400`. The `total_shards_per_node` configuration can also be set to `null`, which represents no upper bound with regards to how many shards can be collocated on one node in the system. 
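+To verify whether the higher limit unblocks allocation, you can ask {{es}} why a shard remains unassigned. The following sketch uses the cluster allocation explain API; `my-index-000001`, the shard number, and the `primary` flag are placeholders for the shard you are investigating:
+
+```console
+GET _cluster/allocation/explain
+{
+  "index": "my-index-000001",
+  "shard": 0,
+  "primary": false
+}
+```
+
+The response lists each allocation decider and its decision, so any remaining per-node shard limit shows up explicitly.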
+ + + +## Determine which data tier needs more capacity [determine-data-tier] -::::::{tab-item} Self-managed -In order to get the shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. +If increasing the cluster shard limit alone doesn't resolve the issue, or if you want to distribute shards more evenly, you need to identify which [data tier](/manage-data/lifecycle/data-tiers.md) requires additional capacity. -To inspect which tier is an index targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +Use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings ``` -The response will look like this: +The response looks like this: ```console-result { @@ -98,42 +96,34 @@ The response will look like this: } ``` -1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster. +1. Represents a comma-separated list of data tier node roles this index is allowed to be allocated on. The first tier in the list has the highest priority and is the tier the index is targeting. 
In this example, the tier preference is `data_warm,data_hot`, so the index is targeting the `warm` tier. If the warm tier lacks capacity, the index will fall back to the `data_hot` tier. -Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value: -1. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) for the index with unassigned shards: - ```console - GET /_cluster/settings?flat_settings - ``` +## Resize your deployment [resize-deployment] - The response will look like this: +After you've identified the tier that needs more capacity, you can resize your deployment to distribute the shard load and allow previously unassigned shards to be allocated. - ```console-result - { - "persistent": { - "cluster.routing.allocation.total_shards_per_node": "300" <1> - }, - "transient": {} - } - ``` +:::::::{applies-switch} - 1. Represents the current configured value for the total number of shards that can reside on one node in the system. +::::::{applies-item} { ess:, ece: } +To enable a new tier in your {{ech}} deployment, you edit the deployment topology to add a new data tier. -2. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: +1. In {{kib}}, open your deployment’s navigation menu (placed under the Elastic logo in the upper left corner) and go to **Manage this deployment**. +1. From the right hand side, click to expand the **Manage** dropdown button and select **Edit deployment** from the list of options. +1. 
On the **Edit** page, click **+ Add Capacity** for the tier that you need to enable in your deployment. Choose the desired size and availability zones for the new tier.
+1. Navigate to the bottom of the page and click the **Save** button.

-    ```console
-    PUT _cluster/settings
-    {
-      "persistent" : {
-        "cluster.routing.allocation.total_shards_per_node" : 400 <1>
-      }
-    }
-    ```
+::::::
+
+::::::{applies-item} { self: }
+Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/manage-data/lifecycle/data-tiers.md#configure-data-tiers-on-premise) to the new nodes by adjusting the configuration in `elasticsearch.yml`.
+
+::::::

-    1. The new value for the system-wide `total_shards_per_node` configuration is increased from the previous value of `300` to `400`. The `total_shards_per_node` configuration can also be set to `null`, which represents no upper bound with regards to how many shards can be collocated on one node in the system.
+::::::{applies-item} { eck: }
+Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#change-node-role) to the new nodes by adjusting the [node configuration](/deploy-manage/deploy/cloud-on-k8s/node-configuration.md) in the `spec` section of your {{es}} resource manifest.
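+For example, if the index targets the warm tier, a NodeSet for that tier might look like the following sketch. The NodeSet name, count, and cluster name are illustrative:
+
+```yaml subs=true
+apiVersion: elasticsearch.k8s.elastic.co/v1
+kind: Elasticsearch
+metadata:
+  name: quickstart
+spec:
+  version: {{version.stack}}
+  nodeSets:
+  - name: warm-nodes # illustrative name
+    count: 2 # illustrative count
+    config:
+      node.roles: ["data_warm"] # role matching the tier the index targets
+```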
:::::: ::::::: diff --git a/troubleshoot/elasticsearch/increase-shard-limit.md b/troubleshoot/elasticsearch/increase-shard-limit.md index 99f9c27b23..bdfae49bf2 100644 --- a/troubleshoot/elasticsearch/increase-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-shard-limit.md @@ -4,11 +4,6 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-shard-limit.html applies_to: stack: - deployment: - eck: - ess: - ece: - self: products: - id: elasticsearch --- @@ -19,75 +14,75 @@ products: {{es}} takes advantage of all available resources by distributing data (index shards) among the cluster nodes. -You might want to influence this data distribution by configuring the [index.routing.allocation.total_shards_per_node](elasticsearch://reference/elasticsearch/index-settings/total-shards-per-node.md#total-shards-per-node) index setting to a custom value (for example, `1` in case of a highly trafficked index). Various configurations limiting how many shards an index can have located on one node can lead to shards being unassigned, because the cluster does not have enough nodes to satisfy the index configuration. +You can influence this data distribution by configuring the [index.routing.allocation.total_shards_per_node](elasticsearch://reference/elasticsearch/index-settings/total-shards-per-node.md#total-shards-per-node) dynamic index setting to restrict the maximum number of shards from a single index that can be allocated to a node. +For example, in case of a highly trafficked index, the value can be set to `1`. +Various configurations limiting how many shards an index can have located on one node can lead to shards being unassigned, because the cluster does not have enough nodes to satisfy the index configuration. To fix this issue, complete the following steps: -:::::::{tab-set} +1. [Check and adjust the index allocation settings](#adjust-index-allocation-settings) to determine the current value and increase it if needed. +1. 
[Determine which data tier needs more capacity](#determine-data-tier) to identify the tier where shards need to be allocated. +1. [Resize your deployment](#resize-deployment) to add capacity and accommodate additional shards. -::::::{tab-item} {{ech}} -In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node. We’ll achieve this by inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value for the indices that have shards unassigned. -**Use {{kib}}** -1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Hosted deployments** panel, click the name of your deployment. +## Check and adjust the index allocation settings [adjust-index-allocation-settings] - ::::{note} - If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md). - :::: +The `index.routing.allocation.total_shards_per_node` setting controls the maximum number of shards that can be collocated on a node in your cluster. When this limit is reached, {{es}} cannot assign new shards to that node, leading to unassigned shards in your cluster. -3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**. +By checking the current value and increasing it, you allow more shards to be collocated on each node, which might resolve the allocation issue without adding more capacity to your cluster. 
- :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png - :alt: {{kib}} Console - :screenshot: - ::: +You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [{{es}} API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. -4. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: +### Check the current index setting [check-the-index-setting] - ```console - GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings - ``` +Use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to inspect the `index.routing.allocation.total_shards_per_node` value for the index with unassigned shards: + +```console +GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings +``` - The response will look like this: +The response looks like this: - ```console-result - { - "my-index-000001": { - "settings": { - "index.routing.allocation.total_shards_per_node": "1" <1> - } - } +```console-result +{ + "my-index-000001": { + "settings": { + "index.routing.allocation.total_shards_per_node": "1" <1> } - ``` + } +} +``` - 1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index. +1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index. -5. 
[Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: +### Increase the setting - ```console - PUT /my-index-000001/_settings - { - "index" : { - "routing.allocation.total_shards_per_node" : "2" <1> - } - } - ``` +Use the [update index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) API to increase the value for the total number of shards that can be assigned on a node to a higher value that accommodates your workload: + +```console +PUT /my-index-000001/_settings +{ + "index" : { + "routing.allocation.total_shards_per_node" : "2" <1> + } +} +``` + +1. The new value for the `total_shards_per_node` configuration for the `my-index-000001` index is increased from the previous value of `1` to `2`. The `total_shards_per_node` configuration can also be set to `-1`, which represents no upper bound with regards to how many shards of the same index can reside on one node. - 1. The new value for the `total_shards_per_node` configuration for the `my-index-000001` index is increased from the previous value of `1` to `2`. The `total_shards_per_node` configuration can also be set to `-1`, which represents no upper bound with regards to how many shards of the same index can reside on one node. -:::::: -::::::{tab-item} Self-managed -In order to get the shards assigned you can add more nodes to your {{es}} cluster and assing the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. 
+## Determine which data tier needs more capacity [determine-data-tier] -To inspect which tier is an index targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +If increasing the index shard limit alone doesn't resolve the issue, or if you want to distribute shards more evenly, you need to identify which [data tier](/manage-data/lifecycle/data-tiers.md) requires additional capacity. + +Use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings ``` -The response will look like this: +The response looks like this: ```console-result { @@ -99,43 +94,104 @@ The response will look like this: } ``` -1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster. +1. Represents a comma-separated list of data tier node roles this index is allowed to be allocated on. The first tier in the list has the highest priority and is the tier the index is targeting. In this example, the tier preference is `data_warm,data_hot`, so the index is targeting the `warm` tier. If the warm tier lacks capacity, the index will fall back to the `data_hot` tier. 
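+To see how many nodes can currently host shards for the target tier, you can list each node's roles. In the `node.role` column, `h` stands for `data_hot` and `w` for `data_warm`:
+
+```console
+GET _cat/nodes?v&h=name,node.role
+```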
+## Resize your deployment [resize-deployment]

-Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value will allow more shards to be assigned on the same node.
+After you've identified the tier that needs more capacity, you can resize your deployment to distribute the shard load and allow previously unassigned shards to be allocated.

-1. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards:
+:::::::{applies-switch}

-    ```console
-    GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings
-    ```
+::::::{applies-item} { ess:, ece: }
+To add disk capacity to a data tier in your {{ech}} deployment, edit the deployment topology.

-    The response will look like this:
+1. In {{kib}}, open your deployment’s navigation menu (placed under the Elastic logo in the upper left corner) and go to **Manage this deployment**.
+1. On the right-hand side, expand the **Manage** dropdown and select **Edit deployment**.
+1. On the **Edit** page, click **+ Add Capacity** for the tier that you identified as needing more capacity. Choose the desired size and availability zones for that tier.
+1. Scroll to the bottom of the page and click **Save**.
- ```console-result - { - "my-index-000001": { - "settings": { - "index.routing.allocation.total_shards_per_node": "1" <1> - } - } - } +:::::: + +::::::{applies-item} { self: } +[Add more nodes](/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md) to your {{es}} cluster and assign the index’s target tier [node role](/manage-data/lifecycle/data-tiers.md#configure-data-tiers-on-premise) to the new nodes, by adjusting the configuration in `elasticsearch.yml`. + +:::::: + +::::::{applies-item} { eck: } +To increase the disk capacity of data nodes in your {{eck}} cluster, you can either add more data nodes or increase the storage size of existing nodes. + +**Option 1: Add more data nodes** + +1. Update the `count` field in your data node NodeSet to add more nodes: + + ```yaml subs=true + apiVersion: elasticsearch.k8s.elastic.co/v1 + kind: Elasticsearch + metadata: + name: quickstart + spec: + version: {{version.stack}} + nodeSets: + - name: data-nodes + count: 5 # Increase from previous count + config: + node.roles: ["data"] + volumeClaimTemplates: + - metadata: + name: elasticsearch-data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi ``` - 1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index. +1. Apply the changes: -2. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the total number of shards that can be assigned on one node or reset the value to unbounded (`-1`): + ```sh + kubectl apply -f your-elasticsearch-manifest.yaml + ``` + + ECK automatically creates the new nodes and {{es}} will relocate shards to balance the load. You can monitor the progress using: ```console - PUT /my-index-000001/_settings - { - "index" : { - "routing.allocation.total_shards_per_node" : -1 - } - } + GET /_cat/shards?v&h=state,node&s=state ``` + +**Option 2: Increase storage size of existing nodes** + +1. 
If your storage class supports [volume expansion](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), you can increase the storage size in the `volumeClaimTemplates`: + + ```yaml subs=true + apiVersion: elasticsearch.k8s.elastic.co/v1 + kind: Elasticsearch + metadata: + name: quickstart + spec: + version: {{version.stack}} + nodeSets: + - name: data-nodes + count: 3 + config: + node.roles: ["data"] + volumeClaimTemplates: + - metadata: + name: elasticsearch-data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 200Gi # Increased from previous size + ``` + +1. Apply the changes. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem will be resized online without restarting {{es}}. Otherwise, you may need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem. + +For more information, refer to [](/deploy-manage/deploy/cloud-on-k8s/update-deployments.md) and [](/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md). :::::: + ::::::: :::{include} /deploy-manage/_snippets/autoops-callout-with-ech.md ::: \ No newline at end of file diff --git a/troubleshoot/elasticsearch/increase-tier-capacity.md b/troubleshoot/elasticsearch/increase-tier-capacity.md index a3492763e3..b9aa0b51e3 100644 --- a/troubleshoot/elasticsearch/increase-tier-capacity.md +++ b/troubleshoot/elasticsearch/increase-tier-capacity.md @@ -4,11 +4,6 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-tier-capacity.html applies_to: stack: - deployment: - eck: - ess: - ece: - self: products: - id: elasticsearch --- @@ -21,35 +16,21 @@ If a warning is encountered with not enough nodes to allocate all shard replicas To accomplish this, complete the following steps: -:::::::{tab-set} +1. [Determine which data tier needs more capacity](#determine-data-tier) to identify the tier where shards need to be allocated. +1. 
[Resize your deployment](#resize-deployment) to add capacity and accommodate all shard replicas. +1. [Check and adjust the index replicas limit](#adjust-index-replica-limit) to determine the current value and reduce it if needed. -::::::{tab-item} {{ech}} -One way to get the replica shards assigned is to add an availability zone. This will increase the number of data nodes in the {{es}} cluster so that the replica shards can be assigned. This can be done by editing your deployment. But first you need to discover which tier an index is targeting for assignment. Do this using {{kib}}. +## Determine which data tier needs more capacity [determine-data-tier] -**Use {{kib}}** +You can run the following step using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls. -1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Hosted deployments** panel, click the name of your deployment. - - ::::{note} - If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md). - :::: - -3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**. 
- - :::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png - :alt: {{kib}} Console - :screenshot: - ::: - - -To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +To determine which tiers an index's shards can be allocated to, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings ``` -The response will look like this: +The response looks like this: ```console-result { @@ -61,113 +42,54 @@ The response will look like this: } ``` -1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster. +1. Represents a comma-separated list of data tier node roles this index is allowed to be allocated on. The first tier in the list has the highest priority and is the tier the index is targeting. In this example, the tier preference is `data_warm,data_hot`, so the index is targeting the `warm` tier. If the warm tier lacks capacity, the index will fall back to the `data_hot` tier. -Now that you know the tier, you want to increase the number of nodes in that tier so that the replicas can be allocated. 
To do this you can either increase the size per zone to increase the number of nodes in the availability zone(s) you were already using, or increase the number of availability zones. Go back to the deployment’s landing page by clicking on the three horizontal bars on the top left of the screen and choosing **Manage this deployment**. On that page click the **Manage** button, and choose **Edit deployment**. Note that you must be logged in to [https://cloud.elastic.co/](https://cloud.elastic.co/) in order to do this. In the {{es}} section, find the tier where the replica shards could not be assigned. -:::{image} /troubleshoot/images/elasticsearch-reference-ess-advanced-config-data-tiers.png -:alt: {{kib}} Console -:screenshot: -::: - -* Option 1: Increase the size per zone - - * Look at the values in the **Size per zone** drop down. One node is created in each zone for every 64 GB of RAM you select here. If you currently have 64 GB RAM or less selected, you have one node in each zone. If you select 128 GB RAM, you will get 2 nodes per zone. If you select 192 GB RAM, you will get 3 nodes per zone, and so on. If the value is less than the maximum possible, you can choose a higher value for that tier to add more nodes. +## Resize your deployment [resize-deployment] -* Option 2: Increase the number of availability zones +After you've identified the tier that needs more capacity, you can resize your deployment to distribute the shard load and allow previously unassigned shards to be allocated. - * Find the **Availability zones** selection. If it is less than 3, you can select a higher number of availability zones for that tier. +:::::::{applies-switch} +::::::{applies-item} { ess:, ece: } -If it is not possible to increase the size per zone or the number of availability zones, you can reduce the number of replicas of your index data. 
We’ll achieve this by inspecting the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting index setting and decreasing the configured value.
+You can either increase the size per zone, which adds nodes to the availability zone(s) you already use, or increase the number of availability zones.

-1. Access {{kib}} as described above.
-2. Inspect the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting.
+1. In {{kib}}, open your deployment’s navigation menu (placed under the Elastic logo in the upper left corner) and go to **Manage this deployment**.
+1. On the right-hand side, expand the **Manage** dropdown and select **Edit deployment**.
+1. On the **Edit** page, click **+ Add Capacity** for the tier that you identified as needing more capacity. Choose the desired size and availability zones for that tier.
+1. Scroll to the bottom of the page and click **Save**.

-    ```console
-    GET /my-index-000001/_settings/index.number_of_replicas
-    ```
-
-    The response will look like this:
-
-    ```console-result
-    {
-      "my-index-000001" : {
-        "settings" : {
-          "index" : {
-            "number_of_replicas" : "2" <1>
-          }
-        }
-      }
-    }
-    ```
-
-    1. Represents the currently configured value for the number of replica shards required for the index
-
-3. Use the `_cat/nodes` API to find the number of nodes in the target tier:
-
-    ```console
-    GET /_cat/nodes?h=node.role
-    ```
-
-    The response will look like this, containing one row per node:
-
-    ```console-result
-    himrst
-    mv
-    himrst
-    ```
-
-    You can count the rows containing the letter representing the target tier to know how many nodes you have. See [Query parameters](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details.
The example above has two rows containing `h`, so there are two nodes in the hot tier. - -4. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1. +:::::: - ```console - PUT /my-index-000001/_settings - { - "index" : { - "number_of_replicas" : 1 <1> - } - } - ``` +::::::{applies-item} { self: } +Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/manage-data/lifecycle/data-tiers.md#configure-data-tiers-on-premise) to the new nodes, by adjusting the configuration in `elasticsearch.yml`. - 1. The new value for the `index.number_of_replicas` index configuration is decreased from the previous value of `2` to `1`. It can be set as low as 0 but configuring it to 0 for indices other than [searchable snapshot indices](../../deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md) may lead to temporary availability loss during node restarts or permanent data loss in case of data corruption. :::::: -::::::{tab-item} Self-managed -In order to get the replica shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. 
-
-To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting:
+::::::{applies-item} { eck: }
+Add more nodes to your {{es}} cluster and assign the index’s target tier [node role](/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#change-node-role) to the new nodes, by adjusting the [node configuration](/deploy-manage/deploy/cloud-on-k8s/node-configuration.md) in the `spec` section of your {{es}} resource manifest.
+::::::

-```console
-GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings
-```
+:::::::

+:::{include} /deploy-manage/_snippets/autoops-callout-with-ech.md
+:::

-The response will look like this:

-```console-result
-{
-  "my-index-000001": {
-    "settings": {
-      "index.routing.allocation.include._tier_preference": "data_warm,data_hot" <1>
-    }
-  }
-}
-```

+## Check and adjust the index replicas limit [adjust-index-replica-limit]

-1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster.
+If it is not possible to increase capacity by resizing your deployment, you can reduce the number of replicas of your index data. You achieve this by inspecting the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting and decreasing the configured value.
-Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspect the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting and decrease the configured value: -1. Inspect the [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#dynamic-index-number-of-replicas) index setting for the index with unassigned replica shards: +1. Use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.number_of_replicas` index setting. ```console GET /my-index-000001/_settings/index.number_of_replicas ``` - The response will look like this: + The response looks like this: ```console-result { @@ -183,13 +105,13 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec 1. Represents the currently configured value for the number of replica shards required for the index -2. Use the `_cat/nodes` API to find the number of nodes in the target tier: +1. Use the [`_cat/nodes`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) API to find the number of nodes in the target tier: ```console GET /_cat/nodes?h=node.role ``` - The response will look like this, containing one row per node: + The response looks like this, containing one row per node: ```console-result himrst @@ -199,7 +121,7 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec You can count the rows containing the letter representing the target tier to know how many nodes you have. See [Query parameters](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier. -3. 
[Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1.
+1. Use the [update index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) API to decrease the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1.

    ```console
    PUT /my-index-000001/_settings
    {
      "index" : {
        "number_of_replicas" : 1 <1>
      }
    }
    ```

    1. The new value for the `index.number_of_replicas` index configuration is decreased from the previous value of `2` to `1`. It can be set as low as 0 but configuring it to 0 for indices other than [searchable snapshot indices](../../deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md) may lead to temporary availability loss during node restarts or permanent data loss in case of data corruption.
-::::::
-:::::::

 :::{include} /deploy-manage/_snippets/autoops-callout-with-ech.md
 :::
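The node-counting arithmetic in the step above can be sketched in shell. This is only an illustration: the sample `_cat/nodes?h=node.role` output from earlier is hard-coded instead of fetched from a live cluster.

```shell
# Sample _cat/nodes?h=node.role output (one row per node, from the example above).
cat_nodes='himrst
mv
himrst'

# Hot-tier nodes are the rows whose role string contains "h".
hot_nodes=$(printf '%s\n' "$cat_nodes" | grep -c 'h')

# Replicas cannot share a node with their primary, so the maximum
# safe index.number_of_replicas is the node count minus one.
max_replicas=$((hot_nodes - 1))
echo "hot nodes: $hot_nodes, max number_of_replicas: $max_replicas"
# Prints: hot nodes: 2, max number_of_replicas: 1
```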