-
-
-
-## What will you do as an ambassador?
-
-As an ambassador, you’ll play a vital role in sharing best practices for using containers in AI workflows, advocating
-for open-source tools for AI model training and inference, and helping the community use `dstack` with
-various cloud providers, data centers, and GPU vendors.
-
-Your contributions might include writing technical blog posts, delivering talks, organizing `dstack` meetups, and
-championing the open AI ecosystem within the broader community.
-
-## Who is the program for?
-
-Whether you’re new to `dstack` or already experienced, the ambassador program is open to anyone passionate
-about open-source AI, eager to share knowledge, and excited to engage with the AI community.
-
-## How do we support ambassadors?
-
-At `dstack`, we support ambassadors by recognizing their work, amplifying their content, and providing
-cloud GPU credits to power their projects.
-
-## How to apply?
-
-If you’re interested in becoming an ambassador, fill out a quick form with details about
-yourself and your experience. We’ll reach out with a starter kit and next steps.
-
-
- Get involved
-
-
-Have questions? Reach out via [Discord](https://discord.gg/u8SmfwPpMd)!
-
-> 💜 In the meantime, we’re thrilled to
-> welcome [Park Chansung](https://x.com/algo_diver), the
-> first `dstack` ambassador.
diff --git a/docs/blog/archive/efa.md b/docs/blog/archive/efa.md
deleted file mode 100644
index 6841cd976b..0000000000
--- a/docs/blog/archive/efa.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-title: Efficient distributed training with AWS EFA
-date: 2025-02-20
-description: "The latest release of dstack allows you to use AWS EFA for your distributed training tasks."
-slug: efa
-image: https://dstack.ai/static-assets/static-assets/images/distributed-training-with-aws-efa-v2.png
-categories:
- - Cloud fleets
----
-
-# Efficient distributed training with AWS EFA
-
-[Amazon Elastic Fabric Adapter (EFA)](https://aws.amazon.com/hpc/efa/) is a high-performance network interface designed for AWS EC2 instances, enabling
-ultra-low latency and high-throughput communication between nodes. This makes it an ideal solution for scaling
-distributed training workloads across multiple GPUs and instances.
-
-With the latest release of `dstack`, you can now leverage AWS EFA to supercharge your distributed training tasks.
-
-
-
-
-
-## About EFA
-
-AWS EFA delivers up to 400 Gbps of bandwidth, enabling lightning-fast GPU-to-GPU communication across nodes. By
-bypassing the kernel and providing direct network access, EFA minimizes latency and maximizes throughput. Its native
-integration with the `nccl` library ensures optimal performance for large-scale distributed training.
-
-With EFA, you can scale your training tasks to thousands of nodes.
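-
-As a quick sanity check, you can confirm from inside a container that libfabric sees the EFA devices and that NCCL picks them up. The commands and environment variables below come from libfabric and NCCL, not from `dstack` itself, and are illustrative rather than required settings:
-
-```shell
-# Verify the EFA provider is visible to libfabric (requires the EFA drivers).
-fi_info -p efa
-
-# Illustrative environment variables for a training run:
-export FI_PROVIDER=efa           # ask libfabric for the EFA provider
-export FI_EFA_USE_DEVICE_RDMA=1  # enable GPUDirect RDMA on supported instances
-export NCCL_DEBUG=INFO           # NCCL logs should report the selected provider
-```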
-
-To run your distributed training tasks with AWS EFA via `dstack`, follow the steps below.
-
-## Configure the backend
-
-Before using EFA, ensure the `aws` backend is properly configured.
-
-If you're using P4 or P5 instances with multiple
-network interfaces, you’ll need to disable public IPs. Note that in this case the `dstack`
-server must have access to the VPC's private subnet.
-
-You’ll also need to specify an AMI that includes the GDRCopy drivers. For example, you can use the
-[AWS Deep Learning Base GPU AMI](https://aws.amazon.com/releasenotes/aws-deep-learning-base-gpu-ami-ubuntu-22-04/).
-
-Here’s an example backend configuration:
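-
-A minimal sketch of what such a `server/config.yml` entry might look like. The field names (`regions`, `vpc_ids`, `public_ips`) and all values here are assumptions based on typical `dstack` backend configuration, not the original example; the AMI with GDRCopy drivers would be configured separately:
-
-```yaml
-projects:
-  - name: main
-    backends:
-      - type: aws
-        creds:
-          type: default
-        # Illustrative region and VPC: substitute your own.
-        regions: [us-east-1]
-        vpc_ids:
-          us-east-1: vpc-0123456789abcdef0
-        public_ips: false  # required for multi-interface P4/P5 instances
-```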
-
diff --git a/docs/examples/clusters/efa/index.md b/docs/examples/clusters/aws/index.md
similarity index 100%
rename from docs/examples/clusters/efa/index.md
rename to docs/examples/clusters/aws/index.md
diff --git a/examples/clusters/efa/README.md b/examples/clusters/aws/README.md
similarity index 90%
rename from examples/clusters/efa/README.md
rename to examples/clusters/aws/README.md
index 98198a8388..f7cb622704 100644
--- a/examples/clusters/efa/README.md
+++ b/examples/clusters/aws/README.md
@@ -1,4 +1,4 @@
-# AWS EFA
+# AWS
 
 In this guide, we’ll walk through how to run high-performance distributed training on AWS
 using [Amazon Elastic Fabric Adapter (EFA)](https://aws.amazon.com/hpc/efa/) with `dstack`.
@@ -37,11 +37,11 @@ projects:
 
 Once your backend is ready, define a fleet configuration.
 
-
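
For reference, a fleet configuration for an EFA-capable cluster might look roughly like the sketch below. The name, node count, and GPU spec are placeholder assumptions, not values from the guide:

```yaml
type: fleet
name: efa-fleet

nodes: 2
placement: cluster  # co-locate nodes for low-latency interconnect

resources:
  gpu: H100:8
```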