ipfs-datapod

Duniter Datapod

    Duniter Datapod is designed for offchain storage of Ğ1 data, but it does not depend on Duniter and can be used independently. It contains multiple components:

    • a "collector" which listens on pubsub and puts index requests in a timestamped AMT
    • an "indexer" which takes index requests of specific kinds and put them in a Postgres database to serve contente with Hasura GraphQL API
    • a dev tool in Vue to understand the architecture, explore the data, and debug
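
    For example, you can watch raw index requests as they arrive by subscribing to the collector's pubsub topic from the kubo CLI (the topic name below is a placeholder; use the one configured for your node):

    # subscribe to the collector's pubsub topic and print incoming index requests
    ipfs pubsub sub "<topic>"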

    Scheme of data flow in Duniter Datapod

    Use

    To start a full indexer in production mode with Docker, use the docker-compose.prod.yml file:

    # start the stack defined in docker-compose.prod.yml
    docker compose -f docker-compose.prod.yml up -d

    This will pull preconfigured images for Postgres/Hasura, Kubo, and the datapod. It should:

    • connect to the existing network
    • start collecting from the default IPNS entry
    • index to the database
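
    To check that everything started correctly, you can list the containers and follow the logs (the service name below is an assumption; check docker-compose.prod.yml for the actual names):

    # list the running services of the stack
    docker compose -f docker-compose.prod.yml ps
    # follow the datapod logs (replace "datapod" with the real service name if it differs)
    docker compose -f docker-compose.prod.yml logs -f datapod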

    You can then set up a simple proxy_pass to HASURA_LISTEN_PORT and KUBO_GATEWAY_PORT.
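
    As a quick smoke test, you can also hit both services directly on their ports (a sketch, assuming the ports exported in your .env; depending on your permission setup the GraphQL query may need an x-hasura-admin-secret header):

    # Hasura health check and a trivial GraphQL query
    curl -s "http://localhost:$HASURA_LISTEN_PORT/healthz"
    curl -s "http://localhost:$HASURA_LISTEN_PORT/v1/graphql" \
      -H 'Content-Type: application/json' \
      -d '{"query":"query { __typename }"}'
    # fetch some content through the kubo gateway (replace <cid> with a CID you expect to be available)
    curl -s "http://localhost:$KUBO_GATEWAY_PORT/ipfs/<cid>"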

    Dev

    Install dev dependencies

    # use node version 20
    nvm use 20
    # install dependencies
    pnpm install

    Start a kubo node with pubsub enabled, and start Postgres/Hasura

    # start kubo node TODO put this in the docker
    ipfs daemon --enable-pubsub-experiment
    # start postgres / hasura
    docker compose up -d
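
    A quick sanity check that both are up (a sketch; service names depend on the dev compose file):

    # confirm the kubo node is running and pubsub is usable
    ipfs id
    ipfs pubsub ls
    # confirm the postgres/hasura containers are running
    docker compose ps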

    Copy, edit, and load your .env file

    # copy from template
    cp .env.example .env
    # adapt .env file then export variables
    source .env.sh
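
    You can verify that the variables were exported before starting anything (HASURA_LISTEN_PORT and KUBO_GATEWAY_PORT are the examples used above; your .env may define more):

    # check that the expected variables are present in the environment
    env | grep -E 'HASURA|KUBO'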

    Then start whatever you need (indexer, dev tool, c+ import...)

    # run dev tool app
    pnpm dev
    # run given script
    pnpm exec tsx ./src/script/hello.ts

    More details in the doc below.

    Doc

    TODO

    Bugs

    • initialize dd_keys for a new node (→ bootstrap)
    • fix merging being blocked when an inode is unreachable (the timeout seems to be ignored)
    • fix a pubsub CID that cannot be fetched triggering a pubsub abort (dirty workaround for now)
    • fix UND_ERR_HEADERS_TIMEOUT that happens very often when pinning

    📌 Features
    • use pubkey instead of ss58 address if we want data to be compatible across networks → ss58
    • add periodic sync with a list of trusted peers (IPNS entries)
    • split indexer vue app from backend indexer and improve node admin app
      • clarify the purpose of the main TAMT
      • clarify the addressing format in the tables
      • add domain-specific indexes (for profiles, for example)
      • add a refcount to count the number of documents
      • make the app build in prod mode
      • allow connecting the app to a custom RPC endpoint
    • manage unpin requests when user/admin wants to delete data, see refcount
    • document the dev database change workflow (tracking changes with the Hasura console and squashing migrations)
    • add transaction comment (onchain + offchain to allow unified search)
    • add version history to database (history of index request CIDs) → not systematic
    • update description of pubkey field to "ss58/address"
    • add the ability to remove a node, as well as its parent if removing it leaves the parent empty
    • make base custom per tree (base 16, base 32)