A Squid project to index Snowbridge transfers. It accumulates transfer events from multiple chains (i.e. Ethereum, BridgeHub, AssetHub) and serves them via a GraphQL API.
- Node.js 20.x
- Docker
- npm (note that the yarn package manager is not supported)
Example commands below use sqd. Please install it before proceeding.
# 1. Install dependencies
npm ci
# 2. Copy the env file and make changes if necessary
cp .env.example .env
# 3. Start the target Postgres database and detach
sqd up
# 4. Build the project
sqd build
# 5. Generate database migration
sqd migration:clean && sqd migration && sqd migration:apply
# 6. Start the squid processor for ethereum
sqd process:ethereum
# 7. Start the squid processor for bridgehub
sqd process:bridgehub
# 8. Start the squid processor for assethub
sqd process:assethub
# 9. Start the graphql api
sqd serve
A GraphiQL playground will be available at localhost:4350/graphql.
Some chain processors are not deployed to production but can be run locally for testing or development. These include processors for Mythos, Moonbeam, Neuroweb, and Kusama AssetHub. This repository includes an ecosystem.config.js file that configures PM2 to run these processors.
- Install PM2 globally:
npm install -g pm2
- Complete steps 1-5 from Quickly running the sample
Create a .env file in the root directory with the following variables:
# Mythos Processor
RPC_MYTHOS=wss://polkadot-mythos-rpc.polkadot.io
START_BLOCK_MYTHOS=2542302
# Moonbeam Processor
RPC_MOONBEAM=wss://moonbeam.ibp.network
START_BLOCK_MOONBEAM=8165770
# Neuroweb Processor
RPC_NEUROWEB=wss://parachain-testnet-rpc.origin-trail.network
START_BLOCK_NEUROWEB=10969079
# Kusama AssetHub Processor
RPC_KUSAMA_ASSETHUB=wss://statemine.api.onfinality.io/public-ws
START_BLOCK_KUSAMA_ASSETHUB=9395148
# Optional: Enable debug logging
# SQD_DEBUG=*
Note: The default values shown above are the recommended RPC endpoints and start blocks. You can customize these values as needed for your specific use case.
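Each processor reads its settings from these variables at startup. The sketch below only illustrates the environment handling; the variable names match the list above, but the helper function and its placement are hypothetical, not code from this repository.

```ts
// Hypothetical sketch: reading the Mythos processor settings from the environment.
// The actual processor sources in src/ may structure this differently.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) throw new Error(`Missing required environment variable: ${name}`)
  return value
}

const rpcEndpoint = requireEnv("RPC_MYTHOS")
const startBlock = Number(requireEnv("START_BLOCK_MYTHOS"))
```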
To start all processors:
pm2 start ecosystem.config.js
To start a specific processor:
pm2 start ecosystem.config.js --only mythos
pm2 start ecosystem.config.js --only moonbeam
pm2 start ecosystem.config.js --only neuroweb
pm2 start ecosystem.config.js --only kusama-assethub
# View status of all processors
pm2 status
# View logs
pm2 logs
# View logs for a specific processor
pm2 logs mythos
# Stop all processors
pm2 stop all
# Stop a specific processor
pm2 stop mythos
# Restart all processors
pm2 restart all
# Delete all processors from PM2
pm2 delete all
Start development by defining the schema of the target database via schema.graphql.
The schema definition consists of regular GraphQL type declarations annotated with custom directives.
A full description of the schema.graphql dialect is available here.
Mapping developers use TypeORM entities
to interact with the target database during data processing. All necessary entity classes are
generated by the squid framework from schema.graphql. This is done by running npx squid-typeorm-codegen
or (equivalently) the sqd codegen command.
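For illustration, a generated entity class looks roughly like the sketch below. The TransferStatusToPolkadot name and its fields are assumptions derived from the sample API responses later in this README; the real generated classes live under src/model and may differ in naming and column types.

```ts
import {Entity, PrimaryColumn, Column, Index} from "typeorm"

// Hypothetical sketch of a codegen-style entity for a transfer record.
@Entity()
export class TransferStatusToPolkadot {
  constructor(props?: Partial<TransferStatusToPolkadot>) {
    Object.assign(this, props)
  }

  @PrimaryColumn()
  id!: string

  @Index()
  @Column("text")
  txHash!: string

  @Column("int4")
  status!: number // 0: pending, 1: completed, 2: failed

  @Column("int4")
  nonce!: number

  @Column("text")
  senderAddress!: string

  @Column("text")
  destinationAddress!: string

  @Column("numeric")
  amount!: string

  @Column("timestamp with time zone")
  timestamp!: Date
}
```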
All database changes are applied through migration files located at db/migrations.
The squid-typeorm-migration(1) tool provides several commands to drive the process.
It is all TypeORM under the hood.
# Connect to database, analyze its state and generate migration to match the target schema.
# The target schema is derived from entity classes generated earlier.
# Don't forget to compile your entity classes beforehand!
npx squid-typeorm-migration generate
# Create template file for custom database changes
npx squid-typeorm-migration create
# Apply database migrations from `db/migrations`
npx squid-typeorm-migration apply
# Revert the last performed migration
npx squid-typeorm-migration revert
Available sqd shortcuts:
# Build the project, remove any old migrations, then run `npx squid-typeorm-migration generate`
sqd migration:generate
# Run npx squid-typeorm-migration apply
sqd migration:apply
This part is optional, but strongly advised.
Event, call, and runtime storage data come to mapping handlers as raw untyped JSON. While it is possible to work with raw untyped JSON, it is extremely error-prone, and the JSON structure may change over time due to runtime upgrades.
Squid framework provides a tool for generating type-safe wrappers around events, calls and runtime storage items for each historical change in the spec version. See the typegen page for different chains.
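For the EVM (Ethereum) side, for example, handler code can rely on a typegen-generated binding instead of decoding raw log data by hand. A minimal sketch, assuming a generated module at ./abi/gateway exposing an OutboundMessageAccepted event; both names are illustrative, since the actual generated files depend on the ABIs fed to the typegen:

```ts
// Hypothetical sketch: the generated binding exposes the event topic and a
// typed decode() helper, so handler code never touches raw hex fields directly.
import {events as gateway} from "./abi/gateway" // assumed typegen output path

interface RawLog {
  topics: string[]
  data: string
}

function decodeOutboundMessage(log: RawLog) {
  if (log.topics[0] !== gateway.OutboundMessageAccepted.topic) return undefined
  // Returns an object with typed fields (channel id, nonce, message id, ...)
  // instead of untyped JSON, and fails loudly if the layout does not match.
  return gateway.OutboundMessageAccepted.decode(log)
}
```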
Squid tools assume a certain project layout.
- All compiled js files must reside in lib and all TypeScript sources in src. The layout of lib must reflect src.
- All TypeORM classes must be exported by src/model/index.ts (the lib/model module).
- The database schema must be defined in schema.graphql.
- Database migrations must reside in db/migrations and must be plain js files.
squid-*(1) executables consult the .env file for a number of environment variables.
See the full description in the documentation.
Transfer status can be resolved via these two queries:
- transferStatusToPolkadots
- transferStatusToEthereums
It is possible to extend squid-graphql-server(1) with custom
type-graphql resolvers and to add request validation.
For more details, consult docs.
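A minimal sketch of such a resolver, assuming an entity named TransferStatusToPolkadot and the conventional server-extension/resolvers location; the class and query names below are illustrative, not part of this repository:

```ts
// src/server-extension/resolvers/index.ts (hypothetical example)
import {Query, Resolver, Int} from "type-graphql"
import type {EntityManager} from "typeorm"
import {TransferStatusToPolkadot} from "../../model" // assumed entity name

@Resolver()
export class TransferCountResolver {
  // squid-graphql-server injects a factory for a TypeORM EntityManager
  constructor(private tx: () => Promise<EntityManager>) {}

  @Query(() => Int)
  async toPolkadotTransferCount(): Promise<number> {
    const manager = await this.tx()
    return manager.getRepository(TransferStatusToPolkadot).count()
  }
}
```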
Follow the guides in:
First, log in with your API key:
sqd auth -k YOUR_API_TOKEN
then deploy to the cloud with:
sqd deploy --org snowbridge .
UI or third-party teams can query Snowbridge transfers from this indexer. Explore https://snowbridge.squids.live/snowbridge-subsquid-polkadot@v1/api/graphql for the queries we support.
For ease of use we aggregate all data into two queries: transferStatusToEthereums for the direction to Ethereum and transferStatusToPolkadots for the other direction. A demo script for reference:
./scripts/query-transfers.sh
and the result is something like:
"transferStatusToPolkadots": [
{
"txHash": "0x53597b6f98334a160f26182398ec3e7368be8ca7aea3eea41d288046f3a1999d",
"status": 1, // 0:pending, 1: completed 2: failed
"channelId": "0xc173fac324158e77fb5840738a1a541f633cbec8884c6a601c567d2b376a0539",
"destinationAddress": "0x628119c736c0e8ff28bd2f42920a4682bd6feb7b000000000000000000000000",
"messageId": "0x00d720d39256bab74c0be362005b9a50951a0909e6dabda588a5d319bfbedb65",
"nonce": 561,
"senderAddress": "0x628119c736c0e8ff28bd2f42920a4682bd6feb7b",
"timestamp": "2025-01-20T07:09:47.000000Z",
"tokenAddress": "0xba41ddf06b7ffd89d1267b5a93bfef2424eb2003",
"amount": "68554000000000000000000"
},
...
],
"transferStatusToEthereums": [
{
"txHash": "0xb57627dbcc89be3bdaf465676fced56eeb32d95855db003f1e911aa4c3769059",
"status": 1, // 0:pending, 1: completed 2: failed
"channelId": "0xc173fac324158e77fb5840738a1a541f633cbec8884c6a601c567d2b376a0539",
"destinationAddress": "0x2a9b5c906c6cac92dc624ec0fa6c3b4c9f2e7cc2",
"messageId": "0x95c52ffe4f976c99bcfe8d76f6011e62b7f215ada834e8c0bcf6538b31b1bf87",
"nonce": 152,
"senderAddress": "0x4a79eee26f5dab7c230f7f2c8657cb541a4b8e391c8357f5eb51413f249ddc13",
"timestamp": "2025-01-20T04:10:48.000000Z",
"tokenAddress": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
"amount": "8133242931806029953"
},
...
]
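For reference, a client can also query the endpoint directly. Below is a minimal TypeScript sketch using Node 20's built-in fetch; the limit and orderBy arguments follow the usual squid GraphQL server conventions, and the field selection mirrors the sample response above, so verify both against the GraphiQL schema.

```ts
// Minimal sketch: query the latest transfers towards Polkadot.
const ENDPOINT = "https://snowbridge.squids.live/snowbridge-subsquid-polkadot@v1/api/graphql"

const query = `
  query LatestToPolkadot {
    transferStatusToPolkadots(limit: 5, orderBy: timestamp_DESC) {
      txHash
      status
      messageId
      nonce
      senderAddress
      destinationAddress
      tokenAddress
      amount
      timestamp
    }
  }
`

async function main() {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({query}),
  })
  const {data, errors} = await res.json()
  if (errors) throw new Error(JSON.stringify(errors))
  console.log(data.transferStatusToPolkadots)
}

main()
```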