diff --git a/.github/styles/Vocab/ipfs-docs-vocab/accept.txt b/.github/styles/Vocab/ipfs-docs-vocab/accept.txt
index 2a772fe38..a107355ab 100644
--- a/.github/styles/Vocab/ipfs-docs-vocab/accept.txt
+++ b/.github/styles/Vocab/ipfs-docs-vocab/accept.txt
@@ -114,7 +114,7 @@ homebrew
 hostname
 HTML
 HTTPS
-identafiability
+identifiability
 Infura
 interop
 ipget
diff --git a/.github/styles/pln-ignore.txt b/.github/styles/pln-ignore.txt
index 34cedd32a..2e10bd1af 100644
--- a/.github/styles/pln-ignore.txt
+++ b/.github/styles/pln-ignore.txt
@@ -33,6 +33,8 @@ Caddyfile
 callout
 callouts
 cas
+cdn('s)
+CDN's
 cdns
 certbot
 cid
@@ -55,6 +57,7 @@ crowdsourcing
 crypto(currencies)
 daos
 dapps
+dClimate
 data('s)
 datastore
 deduplicate
@@ -68,6 +71,8 @@ deserialized
 devs
 dheeraj
 dht
+dht('s)
+DHT's
 dhts
 dialable
 dialback
@@ -95,6 +100,8 @@ filestore
 flatfs
 flatf[ss]
 fleek
+Fleek's
+fleek('s)
 fqdns
 gasless
 geospatial
@@ -114,7 +121,7 @@ hostname
 hostnames
 html
 https
-identafiability
+identifiability
 infura
 interop
 ipfs
@@ -136,6 +143,7 @@ keypair
 keystores
 kubo
 Kubo's
+Lakhani
 kubuxu
 laika
 lan
@@ -189,6 +197,8 @@ nats
 neocities
 netlify
 next.js
+nft('s)
+NFT's
 nfts
 nginx
 nodejs
@@ -208,6 +218,7 @@ pluggable
 powergate
 powershell
 preload
+prenegotiated
 prepended
 processannounce
 protobuf
@@ -238,6 +249,7 @@ sandboxed
 satoshi
 satoshi nakamoto
 SDKs
+se
 serverless
 sharding
 snapshotted
@@ -259,6 +271,7 @@ takedown
 testground
 testnet
 toolkits
+toolset
 trustlessly
 trustlessness
 uncensorable
@@ -279,6 +292,7 @@ vue
 Vuepress
 wantlist
 wantlists
+WASM
 web
 webpack
 webpages
@@ -298,4 +312,6 @@ youtube
 IPFS's
 IPIPs
 IPIP
+Zeeshan
+Zelenka
 _redirects
diff --git a/docs/.vuepress/redirects b/docs/.vuepress/redirects
index fdf26e8a4..e50b9b1b5 100644
--- a/docs/.vuepress/redirects
+++ b/docs/.vuepress/redirects
@@ -45,6 +45,7 @@
 /how-to/run-ipfs-inside-docker /install/run-ipfs-inside-docker
 /how-to/ipfs-updater /install/command-line
 /how-to/websites-on-ipfs/link-a-domain /how-to/websites-on-ipfs/custom-domains
+/how-to/websites-on-ipfs/introducing-fleek /how-to/websites-on-ipfs/static-site-generators
 /how-to/gateway-troubleshooting /how-to/troubleshooting
 /install/command-line-quick-start/ /how-to/command-line-quick-start
 /install/js-ipfs/ https://github.com/ipfs/helia/wiki
diff --git a/docs/case-studies/arbol.md b/docs/case-studies/arbol.md
index cb4146759..71906e6d4 100644
--- a/docs/case-studies/arbol.md
+++ b/docs/case-studies/arbol.md
@@ -17,7 +17,7 @@ _— Ben Andre, CTO, Arbol_
 Arbol logo
 :::
 
-[Arbol](https://www.arbolmarket.com/) is a software platform that connects agricultural entities like farmers and other weather-dependent parties with investors and other capital providers to insure and protect against weather-related risks. Arbol's platform sells contracts for parametric weather protection agreements in a marketplace that's an innovative, data-driven approach to risk management, cutting out the usual legacy insurance claims process of making loss assessments on the ground. Instead, Arbol relies on tamper-proof data indexes to determine payouts, and doesn't require a defined loss to be indemnified. Arbol's platform combines parametric weather protection with blockchain-based smart contracts to provide cost-efficient, automated, and user-defined weather-related risk hedging. As with traditional crop insurance and similar legacy products, end users purchase assurance that they'll be financially protected in the case of adverse weather — but with Arbol, these end users are paid automatically if adverse conditions occur, as defined by the contract and measured by local meteorological observations tracked by Arbol's data sources.
+[Arbol](https://www.arbol.io/) is a software platform that connects agricultural entities like farmers and other weather-dependent parties with investors and other capital providers to insure and protect against weather-related risks. Arbol's platform sells contracts for parametric weather protection agreements in a marketplace that's an innovative, data-driven approach to risk management, cutting out the usual legacy insurance claims process of making loss assessments on the ground. Instead, Arbol relies on tamper-proof data indexes to determine payouts, and doesn't require a defined loss to be indemnified. Arbol's platform combines parametric weather protection with blockchain-based smart contracts to provide cost-efficient, automated, and user-defined weather-related risk hedging. As with traditional crop insurance and similar legacy products, end users purchase assurance that they'll be financially protected in the case of adverse weather — but with Arbol, these end users are paid automatically if adverse conditions occur, as defined by the contract and measured by local meteorological observations tracked by Arbol's data sources.
 
 To build the data indexes that Arbol uses to handle its contracts, the team aggregates and standardizes billions of data files comprising decades of weather information from a wide range of reputable sources — all of which is stored on IPFS. IPFS is critical to Arbol's service model due to the inherent verifiability provided by its [content-addressed architecture](../concepts/content-addressing.md), as well as a decentralized data delivery model that facilitates Arbol's day-to-day aggregation, synchronization, and distribution of massive amounts of data.
 
@@ -88,7 +88,7 @@ Arbol's end users enjoy the "it just works" benefits of parametric protection, b
 8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](../reference/kubo/cli.md#ipfs-pin) command on it.
 Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](../concepts/ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.
 
-9. **Garbage collection:** Some older Arbol datasets require [garbage collection](../concepts/glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](../concepts/merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server.
+9. **Garbage collection:** Some older Arbol datasets require [garbage collection](../concepts/glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](../concepts/merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://web.archive.org/web/20230318223234/https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server.
 
 ### The tooling
 
diff --git a/docs/case-studies/fleek.md b/docs/case-studies/fleek.md
index 04eca0583..51b836533 100644
--- a/docs/case-studies/fleek.md
+++ b/docs/case-studies/fleek.md
@@ -3,6 +3,10 @@ title: 'Case study: Fleek'
 description: Explore some helpful use cases, ideas, and examples for the InterPlanetary File System (IPFS).
 ---
 
+::: warning Fleek hosting discontinued
+Fleek's IPFS hosting service was discontinued on January 31st, 2026. This case study is preserved for historical purposes.
+:::
+
 # Case study: Fleek
 
 ::: callout
diff --git a/docs/concepts/README.md b/docs/concepts/README.md
index 07d46acc6..f1c66af20 100644
--- a/docs/concepts/README.md
+++ b/docs/concepts/README.md
@@ -55,7 +55,6 @@ We're adding more documentation all the time and making ongoing revisions to exi
 
 - [Case study: Arbol](../case-studies/arbol.md)
 - [Case study: Audius](../case-studies/audius.md)
-- [Case study: Fleek](../case-studies/fleek.md)
 - [Case study: LikeCoin](../case-studies/likecoin.md)
 - [Case study: Morpheus.Network](../case-studies/morpheus.md)
 - [Case study: Snapshot](../case-studies/snapshot.md)
diff --git a/docs/concepts/cod.md b/docs/concepts/cod.md
index e0f2649bc..ccad0b71d 100644
--- a/docs/concepts/cod.md
+++ b/docs/concepts/cod.md
@@ -11,7 +11,7 @@ IPFS users can perform CoD on IPFS data with the [Bacalhau platform](#bacalhau)
 
 ## Bacalhau
 
-Bacalhau is a platform for fast, cost-efficient, secure, distributed computation. Bacalhau works by running jobs where the data is generated and stored, also referred to as Compute Over Data (or CoD). Using Bacalhau, you can streamline existing workflows without extensive refactoring by running arbitrary Docker containers and WebAssembly (Wasm) images as compute tasks. The name _Bacalhau_ was coined from the Portuguese word for "salted cod fish".
+Bacalhau is a platform for fast, cost-efficient, secure, distributed computation. Bacalhau works by running jobs where the data is generated and stored, also referred to as Compute Over Data (or CoD). Using Bacalhau, you can streamline existing workflows without extensive refactoring by running arbitrary Docker containers and WebAssembly (WASM) images as compute tasks. The name _Bacalhau_ was coined from the Portuguese word for "salted cod fish".
 
 ### Features
 
@@ -25,7 +25,7 @@ Bacalhau can:
 - Run against data [mounted anywhere](https://docs.bacalhau.org/#how-it-works) on your machine.
 - Integrate with services running on nodes to run jobs, such as [DuckDB](https://docs.bacalhau.org/examples/data-engineering/DuckDB/).
 - Operate at scale over parallel jobs and batch process petabytes of data.
-- Auto-generate art using a [Stable Diffusion AI model](https://www.waterlily.ai/) trained on the chosen artist’s original works.
+- Auto-generate art using a [Stable Diffusion AI model](https://web.archive.org/web/20250313163631/https://www.waterlily.ai/) trained on the chosen artist’s original works.
 
 ### More Bacalhau resources
 
@@ -36,7 +36,7 @@ Bacalhau can:
 
 The InterPlanetary Virtual Machine (IPVM) specification defines the easiest, fastest, most secure, and open way to run decentralized compute jobs on IPFS. One way to describe IPVM would be as "an open, decentralized, and local-first competitor to AWS Lambda".
 
-IPVM uses [WebAssembly (Wasm)](https://webassembly.org/), content addressing, [simple public key infrastructure (SPKI)](https://en.wikipedia.org/wiki/Simple_public-key_infrastructure), and object capabilities to liberate computation from specific, prenegotiated services, such as large cloud computing providers. By default, execution scales flexibly on-device, all the way up to edge points-of-presence (PoPs) and data centers.
+IPVM uses [WebAssembly (WASM)](https://webassembly.org/), content addressing, [simple public key infrastructure (SPKI)](https://en.wikipedia.org/wiki/Simple_public-key_infrastructure), and object capabilities to liberate computation from specific, prenegotiated services, such as large cloud computing providers. By default, execution scales flexibly on-device, all the way up to edge points-of-presence (PoPs) and data centers.
 
 The core, Rust-based implementation and runtime of IPVM is the [Homestar project](https://github.com/ipvm-wg/homestar/). IPVM supports interoperability with [Bacalhau](https://bacalhau.org) and [Storacha (formerly web3.storage)](https://storacha.network/)
 
diff --git a/docs/concepts/persistence.md b/docs/concepts/persistence.md
index acc895f5f..c25ccdcf4 100644
--- a/docs/concepts/persistence.md
+++ b/docs/concepts/persistence.md
@@ -48,9 +48,7 @@ Some of the pinning services listed below are operated by third party companies.
 
 - [4EVERLAND Bucket](https://www.4everland.org/bucket/)
 - [Filebase](https://filebase.com/)
-- [NFT.Storage](https://nft.storage/)
 - [Pinata](https://pinata.cloud/)
-- [Scaleway](https://labs.scaleway.com/en/ipfs-pinning/)
 - [Storacha (formerly web3.storage)](https://storacha.network/)
 
 See how to [work with remote pinning services](../how-to/work-with-pinning-services.md).
diff --git a/docs/concepts/privacy-and-encryption.md b/docs/concepts/privacy-and-encryption.md
index d169b93ce..15160b36d 100644
--- a/docs/concepts/privacy-and-encryption.md
+++ b/docs/concepts/privacy-and-encryption.md
@@ -52,9 +52,6 @@ IPFS uses transport-encryption but not content encryption. This means that your
 ### Encryption-based projects using IPFS
 
 - [Ceramic](https://ceramic.network/)
-- [Fission.codes](https://fission.codes/)
-- [Fleek](../case-studies/fleek.md)
 - [Lit Protocol](https://litprotocol.com/)
 - [OrbitDB](https://github.com/orbitdb)
 - [Peergos](https://peergos.org/)
-- [Textile](https://www.textile.io/)
diff --git a/docs/how-to/best-practices-for-nft-data.md b/docs/how-to/best-practices-for-nft-data.md
index d78be8e86..7a867df7e 100644
--- a/docs/how-to/best-practices-for-nft-data.md
+++ b/docs/how-to/best-practices-for-nft-data.md
@@ -139,9 +139,7 @@ When your data is stored on IPFS, users can fetch it from any IPFS node that has
 
 If you're building a platform using IPFS for storage, it's important to pin your data to IPFS nodes that are robust and highly available, meaning that they can operate without significant downtime and with good performance. See our [server infrastructure documentation][docs-server-infra] to learn how [IPFS Cluster][ipfs-cluster] can help you manage your own cloud of IPFS nodes that coordinate to pin your platform's data and provide it to your users.
 
-Alternatively, you can delegate the infrastructure responsibility to a remote pinning service. Remote pinning services like [Pinata](https://pinata.cloud) and [Eternum](https://www.eternum.io/) provide redundant, highly-available storage for your IPFS data, without any _vendor lock-in_. Because IPFS-based content is addressed by CID instead of location, you can switch between pinning services or migrate to your private infrastructure seamlessly as your platform grows.
-
-You can also use a service from [Protocol Labs](https://protocol.ai) called [nft.storage](https://nft.storage) to get your data into IPFS, with long-term persistence backed by the decentralized [Filecoin](https://filecoin.io) storage network. To help foster the growth of the NFT ecosystem and preserve the new _digital commons_ of cultural artifacts that NFTs represent, [nft.storage](https://nft.storage) provides free storage and bandwidth for public NFT data. Sign up for a free account at [https://nft.storage](https://nft.storage) and try it out!
+Alternatively, you can delegate the infrastructure responsibility to a remote pinning service. Remote pinning services like [Pinata](https://pinata.cloud), [Storacha](https://storacha.network/), and [Filebase](https://filebase.com/) provide redundant, highly-available storage for your IPFS data, without any _vendor lock-in_. Because IPFS-based content is addressed by CID instead of location, you can switch between pinning services or migrate to your private infrastructure seamlessly as your platform grows.
 
 To learn more about persistence and pinning, including how to work with remote pinning services, see our [overview of persistence, permanence, and pinning][docs-persistence].
 
diff --git a/docs/how-to/websites-on-ipfs/custom-domains.md b/docs/how-to/websites-on-ipfs/custom-domains.md
index 6b7046dd5..11b941f5b 100644
--- a/docs/how-to/websites-on-ipfs/custom-domains.md
+++ b/docs/how-to/websites-on-ipfs/custom-domains.md
@@ -41,6 +41,5 @@ With this approach, users can access your website via a custom domain name, e.g.
 
 To provide access to the app directly via the custom domain, you have the following options:
 
 1. Self-host both the IPFS provider (e.g. [Kubo](https://github.com/ipfs/kubo)) and the IPFS HTTP gateway (e.g. [Kubo](https://github.com/ipfs/kubo)). Deploy an IPFS Gateway that supports DNSLink resolution and point the `CNAME`/`A` DNS record for your custom domain to it and update the `TXT` record on `_dnslink` subdomain to match CID of your website. [See the guide on setting up a DNSLink gateway](./dnslink-gateway.md) for more details.
-2. Use a service like Fleek which encompasses both DNSLink and traditional web hosting (HTTP + TLS + CDN + [automatic DNSLink management](https://fleek.xyz/docs/platform/domains/#dnslink)).
-3. Deploy the site to a web hosting service like [Cloudflare Pages](https://pages.cloudflare.com/) or [GitHub Pages](https://pages.github.com/) with a custom domain (pointing and configuring the `CNAME`/`A` record for your custom domain on the web hosting service), while managing the DNSLink `TXT` record on `_dnslink` subdomain separately, essentially getting the benefits of both IPFS and traditional web hosting. Remember to set up CI automation to update the DNSLink `TXT` record for every deployment that changes the CID.
+2. Deploy the site to a web hosting service like [Cloudflare Pages](https://pages.cloudflare.com/) or [GitHub Pages](https://pages.github.com/) with a custom domain (pointing and configuring the `CNAME`/`A` record for your custom domain on the web hosting service), while managing the DNSLink `TXT` record on `_dnslink` subdomain separately, essentially getting the benefits of both IPFS and traditional web hosting. Remember to set up CI automation to update the DNSLink `TXT` record for every deployment that changes the CID.
diff --git a/docs/how-to/websites-on-ipfs/images/introducing-fleek/add-or-buy-domain.png b/docs/how-to/websites-on-ipfs/images/introducing-fleek/add-or-buy-domain.png
deleted file mode 100644
index 9288e7012..000000000
Binary files a/docs/how-to/websites-on-ipfs/images/introducing-fleek/add-or-buy-domain.png and /dev/null differ
diff --git a/docs/how-to/websites-on-ipfs/images/introducing-fleek/deployment-information-window.png b/docs/how-to/websites-on-ipfs/images/introducing-fleek/deployment-information-window.png
deleted file mode 100644
index 13725421c..000000000
Binary files a/docs/how-to/websites-on-ipfs/images/introducing-fleek/deployment-information-window.png and /dev/null differ
diff --git a/docs/how-to/websites-on-ipfs/images/introducing-fleek/fleek-homepage.png b/docs/how-to/websites-on-ipfs/images/introducing-fleek/fleek-homepage.png
deleted file mode 100644
index 5b4fa88ae..000000000
Binary files a/docs/how-to/websites-on-ipfs/images/introducing-fleek/fleek-homepage.png and /dev/null differ
diff --git a/docs/how-to/websites-on-ipfs/images/introducing-fleek/fleek-showing-the-website-repo-options.png b/docs/how-to/websites-on-ipfs/images/introducing-fleek/fleek-showing-the-website-repo-options.png
deleted file mode 100644
index c0b81cd2f..000000000
Binary files a/docs/how-to/websites-on-ipfs/images/introducing-fleek/fleek-showing-the-website-repo-options.png and /dev/null differ
diff --git a/docs/how-to/websites-on-ipfs/images/introducing-fleek/github-repo-showing-a-few-files.png b/docs/how-to/websites-on-ipfs/images/introducing-fleek/github-repo-showing-a-few-files.png
deleted file mode 100644
index 434226cbf..000000000
Binary files a/docs/how-to/websites-on-ipfs/images/introducing-fleek/github-repo-showing-a-few-files.png and /dev/null differ
diff --git a/docs/how-to/websites-on-ipfs/introducing-fleek.md b/docs/how-to/websites-on-ipfs/introducing-fleek.md
deleted file mode 100644
index 9e4ff0e26..000000000
--- a/docs/how-to/websites-on-ipfs/introducing-fleek.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-title: Introducing Fleek
-description: Fleek is a service that lets you host a website on IPFS without needing to install anything on your computer or run command-line scripts.
----
-
-# Introducing Fleek
-
-Most of the steps we've covered in this tutorial series have been fairly manual. Wouldn't it be nice if there were a service that did all the busy work for you, so you could focus on hosting great websites on IPFS? That's where Fleek comes in!
-
-![The Fleek homepage, showing a "Build on the New Internet" slogan at the top.](./images/introducing-fleek/fleek-homepage.png)
-
-Fleek is a service that lets you host a website on IPFS without needing to install anything on your computer or deal with the command-line.
-
-We already know that files and folders on IPFS are content-addressed, meaning you can find them using the hash of their content. If the content changes, then the hash changes too. As we've seen in previous lessons, this can be an issue when it comes to updating a website. A single character change to an HTML file will create an entirely new hash!
-
-Fleek offers a simple workflow. Once you've pushed your changes to git, Fleek builds, pins, and updates your site. The service also integrates well with React, Next.js, Gatsby, Jekyll, Hugo, and [a bunch of other popular development frameworks](https://docs.fleek.co/hosting/site-deployment/#common-frameworks). You can also manage your domains through Fleek, and monitor your sites in a similar method to traditional web development.
-
-If you're looking to host a fast website on IPFS, Fleek is a great option! For more information check out [Fleek.co](https://fleek.co) and [their documentation](https://docs.fleek.co/).
-
-## Host a site
-
-If you've never used a service like Fleek, or just need a refresher, this quick guide walks through adding a website to a GitHub repository and linking that repo to your Fleek account.
-
-We're going to re-use the Random Planet Facts site we created in a previous tutorial. If you've been following this tutorial series, you should already have this project ready to go! If you don't, just download the [project `.zip` here](https://github.com/johnnymatthews/random-planet-facts/archive/master.zip) or [clone this repository](https://github.com/johnnymatthews/random-planet-facts).
-
-### Upload to GitHub
-
-If you cloned the Random Planet Facts repo above, you don't need to follow this section.
-
-1. Log into [GitHub](https://github.com).
-1. Create a new repository and upload the Random Planet Facts project.
-1. Your project repository should look something like this:
-
-    ![A GitHub repository showing an index.html file, a style.css file, and an image file.](./images/introducing-fleek/github-repo-showing-a-few-files.png)
-
-### Add a repository to Fleek
-
-1. Go to [Fleek.co](https://fleek.co/) and sign in using your GitHub account. You may need to allow Fleek to access your GitHub profile.
-1. Once logged in, click **Add new site**.
-1. Select **Connect with GitHub** and find the site that you want to host on IPFS.
-1. Leave all the options with their default settings. Since we're not dealing with a special framework or a repository with lots of branches we don't have to change anything here.
-
-    ![Fleek showing the website repository options page.](./images/introducing-fleek/fleek-showing-the-website-repo-options.png)
-
-1. Click **Deploy site**. Fleek will add your site into the build queue. Once it's done you can click **Verify on IPFS** to view your site!
-
-    ![Deployment information window within Fleek.](./images/introducing-fleek/deployment-information-window.png)
-
-## Domain names
-
-Fleek allows you to configure your domain names with your sites on IPFS! No more wrangling with DNSlink or IPNS. You can even buy domains directly through Fleek. Click **Add or Buy Domain** to get started. [Check out the Fleek documentation for more information on how to get your domain linked up →](https://docs.fleek.co/domain-management/overview/)
-
-![A black button leading to the domain section of Fleek](./images/introducing-fleek/add-or-buy-domain.png)
-
-## Up next
-
-For the final tutorial in this series, we're going to take a quick look at [static-site generators, and how to host them on IPFS](static-site-generators.md).
diff --git a/docs/how-to/work-with-pinning-services.md b/docs/how-to/work-with-pinning-services.md
index d2718a147..362b8c993 100644
--- a/docs/how-to/work-with-pinning-services.md
+++ b/docs/how-to/work-with-pinning-services.md
@@ -45,7 +45,6 @@ Third-party pinning services allow you to purchase pinning capacity for importan
 - [Pinata](https://pinata.cloud/)
 - [Filebase](https://filebase.com/)
 - [Storacha (formerly web3.storage)](https://storacha.network/)
-- [Infura](https://infura.io/)
 
 ::: callout
 As of June 2023, [Filebase](https://filebase.com) and [Pinata](https://pinata.cloud/) support the [IPFS Pinning Service API endpoint](https://github.com/ipfs/pinning-services-api-spec).
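Note on the `docs/case-studies/arbol.md` hunk above: the dataset layout it describes — each new post referencing the hash of the previous post, forming a linked list of hashes — can be sketched as follows. This is a minimal illustration only, not Arbol's actual implementation: a SHA-256 hex digest over canonical JSON stands in for a real IPFS CID, and an in-memory dict stands in for the datastore.

```python
import hashlib
import json

def cid_of(record: dict) -> str:
    # Stand-in for an IPFS CID: SHA-256 hex digest of the canonical JSON encoding.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_post(head_cid, payload):
    # Each new post embeds the previous head's hash, extending the linked list.
    record = {"prev": head_cid, "data": payload}
    return cid_of(record), record

store = {}   # hypothetical datastore: hash -> record
head = None  # the "head" that Arbol-style systems track in a reference file
for day, temp in [("2024-01-01", 13.2), ("2024-01-02", 11.7), ("2024-01-03", 14.9)]:
    head, record = append_post(head, {"date": day, "temp_c": temp})
    store[head] = record

# Walk the chain from the head back to the first record, verifying each link.
chain = []
cursor = head
while cursor is not None:
    record = store[cursor]
    # Content addressing makes any tampering with a stored record detectable.
    assert cid_of(record) == cursor
    chain.append(record["data"]["date"])
    cursor = record["prev"]

print(chain)  # dates, newest to oldest
```

Because every record commits to its predecessor's hash, preserving old hashes (rather than overwriting them, as in the legacy scheme the hunk mentions) keeps the full history verifiable from the head alone.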