Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

437 files changed, +348329 −113956

Files

.cargo/config

deleted 100644 → 0
+0 −8
[alias]
cucumber = "test -p duniter-end2end-tests --test cucumber_tests --"
sanity-gdev = "test -p duniter-live-tests --test sanity_gdev -- --nocapture"
tu = "test --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests"
tb = "test --features runtime-benchmarks -p"
rbp = "run --release --features runtime-benchmarks -- benchmark pallet --chain=dev --steps=50 --repeat=20 --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=. --pallet"
xtask = "run --package xtask --"

.cargo/config.toml

new 0 → 100644
+13 −0
[alias]
sanity-gdev = "test -Zgit=shallow-deps -p duniter-live-tests --test sanity_gdev -- --nocapture"
tu = "test -Zgit=shallow-deps --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests --features constant-fees" # Unit tests with constant-fees
tf = "test -Zgit=shallow-deps --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests test_fee" # Custom fee model tests
# `te` is like `cucumber`, but with constant fees
te = "test -p duniter-end2end-tests --test cucumber_tests --features constant-fees --"
cucumber-build = "build -Zgit=shallow-deps --features constant-fees"
cucumber = "test -Zgit=shallow-deps -p duniter-end2end-tests --test cucumber_tests --"
ta = "test -Zgit=shallow-deps --workspace --exclude duniter-live-tests --features constant-fees"
tb = "test -Zgit=shallow-deps --features runtime-benchmarks -p"
rbp = "run -Zgit=shallow-deps --release --features runtime-benchmarks -- benchmark pallet --chain=dev --steps=50 --repeat=20 --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=. --pallet"
xtask = "run -Zgit=shallow-deps --package xtask --"
cucumber-node = "run -Zgit=shallow-deps -- --chain=gdev_dev --execution=Native --sealing=manual --force-authoring --rpc-cors=all --tmp --rpc-port 9944 --alice --features constant-fees"
+1 −0
@@ -6,3 +6,4 @@ docker/Dockerfile
docker-compose.yml
arm-build/
**/target/
build/
+9 −0
@@ -27,3 +27,12 @@ tmp

# Log files
*.log

# Ignore output folder
output/
g1-dump.tgz
/release/

node/specs/gdev-raw.json
node/specs/gtest-raw.json
node/specs/g1-raw.json
+204 −218
# Runner tags:
# - podman: use 'podman' to build multiplatform images

stages:
  - schedule
  - labels
  - quality
  - build
  - tests
  - release
  - deploy
  - deploy_readme

# Job templates for release builds (without tags/image, set by jobs)
.release_rules:
  stage: release
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH =~ /^network\//
      when: manual
    - when: never

.debian_build_template:
  extends: .release_rules
  before_script:
    # Install build dependencies
    - apt-get update -qq && apt-get install -y -qq protobuf-compiler clang libclang-dev
  script:
    - cargo xtask client-build-deb $NETWORK

.rpm_build_template:
  extends: .release_rules
  before_script:
    # Install RPM build tools and dependencies
    - apt-get update -qq && apt-get install -y -qq rpm protobuf-compiler clang libclang-dev
  script:
    - cargo xtask client-build-rpm $NETWORK

.docker_deploy_template:
  extends: .release_rules
  image: docker:latest
  variables:
    DOCKER_HOST: unix:///var/run/docker.sock
  before_script:
    # Install Rust and build dependencies
    - apk add --no-cache curl bash gcc musl-dev
    - curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable --profile minimal
    - source $HOME/.cargo/env
    - docker info
  script:
    - cargo xtask client-docker-deploy $NETWORK --arch $ARCH

# Release build jobs
release_debian_arm:
  extends:
    - .env_arm
    - .debian_build_template
  needs: []
  artifacts:
    paths:
      - target/debian/*.deb

release_debian_x64:
  extends:
    - .env
    - .debian_build_template
  needs: []
  artifacts:
    paths:
      - target/debian/*.deb

release_rpm_arm:
  extends:
    - .env_arm
    - .rpm_build_template
  needs: []
  artifacts:
    paths:
      - target/generate-rpm/*.rpm

release_rpm_x64:
  extends:
    - .env
    - .rpm_build_template
  needs: []
  artifacts:
    paths:
      - target/generate-rpm/*.rpm

release_docker_arm:
  extends: .docker_deploy_template
  needs: []
  tags:
    - linuxARM
  variables:
    ARCH: arm64

release_docker_x64:
  extends: .docker_deploy_template
  needs: []
  tags:
    - kepler
  variables:
    ARCH: amd64

release_docker_manifest:
  stage: release
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH =~ /^network\//
      when: on_success  # Auto-start when dependencies succeed
    - when: never
  needs:
    - job: release_docker_arm
      optional: false
    - job: release_docker_x64
      optional: false
  tags:
    - kepler
  image: docker:latest
  variables:
    DOCKER_HOST: unix:///var/run/docker.sock
  before_script:
    - apk add --no-cache bash grep sed
  script:
    - docker login -u duniterteam -p $DUNITERTEAM_PASSWD docker.io
    - |
      # Extract runtime from NETWORK (e.g., gtest-1100 -> gtest)
      RUNTIME=$(echo $NETWORK | sed 's/-[0-9].*//')
      
      # Get client version from node/Cargo.toml
      CLIENT_VERSION=$(grep '^version = ' node/Cargo.toml | head -1 | sed 's/.*"\(.*\)".*/\1/')
      
      # Get runtime version from runtime/$RUNTIME/src/lib.rs
      RUNTIME_VERSION=$(grep 'spec_version:' runtime/$RUNTIME/src/lib.rs | sed 's/.*spec_version: \([0-9]*\).*/\1/')
      
      IMAGE_NAME="duniter/duniter-v2s-${NETWORK}"
      TAG="${RUNTIME_VERSION}-${CLIENT_VERSION}"
      
      echo "Creating multi-arch manifest for ${IMAGE_NAME}:${TAG}"
      
      # Use buildx imagetools to create multi-arch tags from existing images
      # This works even if the source images are manifest lists
      docker buildx imagetools create \
        --tag ${IMAGE_NAME}:${TAG} \
        ${IMAGE_NAME}:${TAG}-amd64 \
        ${IMAGE_NAME}:${TAG}-arm64
      
      echo "✅ Multi-arch tag created: ${IMAGE_NAME}:${TAG}"
      
      # Also create :latest tag
      docker buildx imagetools create \
        --tag ${IMAGE_NAME}:latest \
        ${IMAGE_NAME}:${TAG}-amd64 \
        ${IMAGE_NAME}:${TAG}-arm64
      
      echo "✅ Multi-arch tag created: ${IMAGE_NAME}:latest"
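The tag-derivation logic in this job can be checked outside CI. A minimal sketch (the `NETWORK` value and both version numbers are illustrative placeholders; in CI they come from the pipeline variable and from grepping `node/Cargo.toml` and `runtime/$RUNTIME/src/lib.rs`):

```sh
# Same extraction logic as the CI script, with placeholder inputs.
NETWORK=gtest-1100                              # in CI: pipeline variable

# Strip the version suffix to get the runtime name (gtest-1100 -> gtest)
RUNTIME=$(echo $NETWORK | sed 's/-[0-9].*//')

CLIENT_VERSION="0.12.0"                         # placeholder; CI greps node/Cargo.toml
RUNTIME_VERSION="1100"                          # placeholder; CI greps runtime/$RUNTIME/src/lib.rs

IMAGE_NAME="duniter/duniter-v2s-${NETWORK}"
TAG="${RUNTIME_VERSION}-${CLIENT_VERSION}"
echo "${IMAGE_NAME}:${TAG}"                     # -> duniter/duniter-v2s-gtest-1100:1100-0.12.0
```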

workflow:
  rules:
    - changes:
@@ -18,6 +169,7 @@ workflow:
        - .gitlab-ci.yml
        - Cargo.toml
        - Cargo.lock
        - resources/*.yaml

sanity_tests:
  extends: .env
@@ -37,10 +189,24 @@ check_labels:
  script:
    - ./scripts/check_labels.sh $CI_MERGE_REQUEST_LABELS $CI_MERGE_REQUEST_MILESTONE

check_metadata:
  extends: .env
  stage: tests
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - when: never
  script:
    - ./scripts/check_metadata.sh

.env:
  image: paritytech/ci-linux:production
  image: paritytech/ci-unified:bullseye-1.88.0
  tags:
    - dind
    - kepler

.env_arm:
  image: rust:latest  # Rust image for ARM builds
  tags:
    - linuxARM

fmt_and_clippy:
  extends: .env
@@ -50,244 +216,64 @@ fmt_and_clippy:
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH == "master"'
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: always
    - when: manual
  stage: quality
  script:
    - cargo fmt -- --version
    - cargo fmt -- --check
    - cargo clippy -- -V
    - cargo clippy --all --tests -- -D warnings

build_debug:
  extends: .env
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
      changes:
      - Cargo.lock
    - when: never
  stage: build
  script:
    - cargo clean -p duniter
    - cargo build --locked
    - mkdir build
    - mv target/debug/duniter build/duniter
  artifacts:
    paths:
      - build/
    expire_in: 3 day
  cache:
    - key:
        files:
          - Cargo.lock
      paths:
        - target/debug
      policy: push
    - cargo clippy -Zgit=shallow-deps --features runtime-benchmarks --all --tests -- -D warnings

build_debug_with_cache:
  extends: .env
  rules:
    - changes:
      - Cargo.lock
      when: never
    - if: $CI_COMMIT_TAG
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
    - when: never
  stage: build
  script:
    - cargo clean -p duniter
    - cargo build --locked
    - mkdir build
    - mv target/debug/duniter build/duniter
  artifacts:
    paths:
      - build/
    expire_in: 3 day
  cache:
    - key:
        files:
          - Cargo.lock
      paths:
        - target/debug
      policy: pull

build_release:
  extends: .env
  rules:
    - if: "$CI_COMMIT_TAG && $CI_COMMIT_TAG =~ /^v*/"
    - when: never
  stage: build
  script:
    - cargo build --locked --release
    - mkdir build
    - mv target/release/duniter build/duniter
  artifacts:
    paths:
      - build/
    expire_in: 3 day

build_release_manual:
run_benchmarks:
  extends: .env
  stage: tests
  rules:
    - if: $CI_COMMIT_REF_NAME =~ /^wip*$/
      when: manual
    - if: $CI_COMMIT_TAG
      when: never
    - if: '$CI_MERGE_REQUEST_ID || $CI_COMMIT_BRANCH == "master"'
    - when: manual
  stage: build
  allow_failure: true
  script:
    - cargo build --locked --release
    - mkdir build
    - mv target/release/duniter build/duniter
  artifacts:
    paths:
      - build/
    expire_in: 3 day
    - cargo build -Zgit=shallow-deps --release --features runtime-benchmarks
    - target/release/duniter benchmark storage --chain=dev --mul=2 --state-version=1 --weight-path=./runtime/g1/src/weights/ --batch-size=100
    - target/release/duniter benchmark overhead --chain=dev --wasm-execution=compiled --warmup=1 --repeat=100 --weight-path=./runtime/g1/src/weights/
    - target/release/duniter benchmark pallet --chain=dev --steps=5 --repeat=2 --pallet="*" --extrinsic="*" --wasm-execution=compiled --output=./runtime/g1/src/weights/
    - cargo build -Zgit=shallow-deps --release --features runtime-benchmarks # Check if autogenerated weights work

tests_debug:
gtest_build:
  stage: build
  extends: .env
  rules:
    - if: $CI_COMMIT_REF_NAME =~ /^wip*$/
      when: manual
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH =~ /^(release\/runtime-)[0-9].*/
      when: never
    - if: '$CI_MERGE_REQUEST_ID || $CI_COMMIT_BRANCH == "master"'
    - when: manual
  stage: tests
  variables:
    DUNITER_BINARY_PATH: "../build/duniter"
    DUNITER_END2END_TESTS_SPAWN_NODE_TIMEOUT: "20"
    DEBIAN_FRONTEND: noninteractive
  script:
    - cargo test --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests
    - cargo cucumber -i account_creation*
    - cargo cucumber -i certification*
    - cargo cucumber -i identity_creation*
    - cargo cucumber -i monetary_mass*
    - cargo cucumber -i oneshot_account*
    - cargo cucumber -i transfer_all*
  after_script:
    - cd target/debug/deps/
    - rm cucumber_tests-*.d
    - mv cucumber_tests* ../../../build/duniter-cucumber
  artifacts:
    paths:
      - build/
    expire_in: 3 day
    - cargo build -Zgit=shallow-deps --no-default-features --features gtest

tests_release:
  extends: .env
  rules:
    - if: "$CI_COMMIT_TAG && $CI_COMMIT_TAG =~ /^v*/"
    - when: never
tests:
  stage: tests
  variables:
    DUNITER_BINARY_PATH: "../build/duniter"
    DUNITER_END2END_TESTS_SPAWN_NODE_TIMEOUT: "20"
  script:
    - cargo test --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests
    - cargo cucumber -i account_creation*
    - cargo cucumber -i certification*
    - cargo cucumber -i identity_creation*
    - cargo cucumber -i monetary_mass*
    - cargo cucumber -i oneshot_account*
    - cargo cucumber -i transfer_all*
  after_script:
    - cd target/debug/deps/
    - rm cucumber_tests-*.d
    - mv cucumber_tests* ../../../build/duniter-cucumber
  artifacts:
    paths:
      - build/
    expire_in: 3 day
  dependencies:
    - build_release

.docker-build-app-image:
  stage: deploy
  image: docker:18.06
  tags:
    - docker
  services:
    - docker:dind
  before_script:
    - docker info
  script:
    - docker pull $CI_REGISTRY_IMAGE:$IMAGE_TAG || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:$IMAGE_TAG --pull -t "$CI_REGISTRY_IMAGE:$IMAGE_TAG" -f $DOCKERFILE_PATH .
    - docker login -u "duniterteam" -p "$DUNITERTEAM_PASSWD"
    - docker tag "$CI_REGISTRY_IMAGE:$IMAGE_TAG" "duniter/duniter-v2s:$IMAGE_TAG"
    - docker push "duniter/duniter-v2s:$IMAGE_TAG"

deploy_docker_test_image:
  extends: .docker-build-app-image
  extends: .env
  rules:
    - if: $CI_COMMIT_REF_NAME =~ /^wip*$/
      when: manual
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH == "master"'
      when: never
    - when: manual
  allow_failure: true
  variables:
    DOCKERFILE_PATH: "docker/Dockerfile"
    IMAGE_TAG: "test-image-$CI_COMMIT_SHORT_SHA"

deploy_docker_debug_sha:
  extends: .docker-build-app-image
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH == "master"
  variables:
    DOCKERFILE_PATH: "docker/Dockerfile"
    IMAGE_TAG: "debug-sha-$CI_COMMIT_SHORT_SHA"
  after_script:
    - docker login -u "duniterteam" -p "$DUNITERTEAM_PASSWD"
    - docker tag "duniter/duniter-v2s:$IMAGE_TAG" "duniter/duniter-v2s:debug-latest"
    - docker push "duniter/duniter-v2s:debug-latest"

deploy_docker_release_sha:
  extends: .docker-build-app-image
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: '$CI_MERGE_REQUEST_ID || $CI_COMMIT_BRANCH == "master"'
    - when: manual
  allow_failure: true
  variables:
    DOCKERFILE_PATH: "docker/Dockerfile"
    IMAGE_TAG: "sha-$CI_COMMIT_SHORT_SHA"
  dependencies:
    - build_release_manual

deploy_docker_release_tag:
  extends: .docker-build-app-image
  rules:
    - if: "$CI_COMMIT_TAG && $CI_COMMIT_TAG =~ /^v*/"
    - when: never
  variables:
    DOCKERFILE_PATH: "docker/Dockerfile"
    IMAGE_TAG: "$CI_COMMIT_TAG"
  after_script:
    - docker login -u "duniterteam" -p "$DUNITERTEAM_PASSWD"
    - docker tag "duniter/duniter-v2s:$IMAGE_TAG" "duniter/duniter-v2s:latest"
    - docker push "duniter/duniter-v2s:latest"
  dependencies:
    - build_release

readme_docker_release_tag:
  stage: deploy_readme
  rules:
    - if: "$CI_COMMIT_TAG && $CI_COMMIT_TAG =~ /^v*/"
    - when: never
  image:
    name: chko/docker-pushrm
    entrypoint: ["/bin/sh", "-c", "/docker-pushrm"]
  variables:
    DOCKER_USER: "duniterteam"
    DOCKER_PASS: "$DUNITERTEAM_PASSWD"
    PUSHRM_SHORT: "Duniter v2 based on Substrate framework"
    PUSHRM_TARGET: "docker.io/duniter/duniter-v2s"
    PUSHRM_DEBUG: 1
    PUSHRM_FILE: "$CI_PROJECT_DIR/docker/README.md"
  script: "/bin/true"
    DEBIAN_FRONTEND: noninteractive
  script:
    - export RUST_MIN_STACK=16777216 # 16MB stack size, otherwise CI fails during LLVM's Thin LTO (Link Time Optimization) phase
    - cargo tu
    - cargo tf
    - cargo cucumber-build
    - cargo cucumber
@@ -29,7 +29,7 @@ USER duniter
# check if executable works in this container
RUN /usr/local/bin/duniter --version

EXPOSE 30333 9933 9944
EXPOSE 30333 9944
VOLUME ["/duniter"]

ENTRYPOINT ["/usr/local/bin/duniter"]
@@ -4,7 +4,7 @@
    100
  ],
  "[json]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
    "editor.defaultFormatter": "vscode.json-language-features"
  },
  "[yaml]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
@@ -14,5 +14,6 @@
    "port_p2p": 19931,
    "port_rpc": 19932,
    "port_ws": 19933
  }
  },
  "rust-analyzer.showUnlinkedFileNotification": false
}
\ No newline at end of file
+11 −8
@@ -4,7 +4,7 @@ Before contributing, please make sure that your development environment is prope

[Setting up your development environment]

Sign-ups on our gitlab are disabled. If you would like to contribute, please ask for its creation on [the technical forum].
Sign-ups on our gitlab are disabled. If you would like to contribute, please ask for an account on [the technical forum].

When contributing to this repository, please first discuss the change you wish to make via issue or
via [the technical forum] before making a change.
@@ -13,13 +13,15 @@ Please note we have a specific workflow, please follow it in all your interactio

## Developer documentation

Please read [Developer documentation] before contribute.
Please read [Developer documentation] before contributing.

## Workflow

- If there is an unassigned issue about the thing you want to contribute to, assign the issue to yourself.
- Create a branch based on `master` and prefixed with your nickname. Give your branch a short name indicating the subject.
- Create an MR from your branch to `master`.
- Never contribute to a branch of another contributor! If the contributor makes a `git rebase` your commit will be lost!
- Create an MR from your branch to `master`. Prefix the title with `Draft: ` until it's ready to be merged.
- If the MR is related to an issue, mention the issue in the description using the `#42` syntax.
- Never push to a branch of another contributor! If the contributor makes a `git rebase` your commit will be lost!
- Before you push your commit:
  - Apply formatters (rustfmt and prettier) and linter (clippy)
  - Document your code
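The branching steps above can be sketched in plain git (the nickname `alice`, the subject `fix-tests`, and the throwaway repository are all hypothetical):

```sh
# Create a contributor branch named <nickname>/<short-subject> off master.
repo=$(mktemp -d)
git -c init.defaultBranch=master init -q "$repo"
cd "$repo"
git -c user.email=alice@example.net -c user.name=alice \
    commit -q --allow-empty -m "init"           # stand-in for existing master history
git checkout -q -b alice/fix-tests master
git branch --show-current                       # -> alice/fix-tests
```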
@@ -30,14 +32,15 @@ Please read [Developer documentation] before contribute.
1. Ensure you rebased your branch on the latest `master` commit to avoid any merge conflicts.
1. Ensure that you respect the [commit naming conventions].
1. Ensure that all automated tests pass with the `cargo test` command.
1. Ensure that the code is well formated `cargo fmt` and comply with the good practices `cargo clippy`. If you have been working on tests, check everything with `cargo clippy --all --tests`.
1. Ensure that the code is well formatted `cargo fmt` and complies with the good practices `cargo clippy`. If you have been working on tests, check everything with `cargo clippy --all --tests`.
1. Update the documentation with details of changes to the interface, this includes new environment variables, exposed ports, useful file locations and container parameters.
1. Push your branch on the gitlab and create a merge request. Briefly explain the purpose of your contribution in the description of the merge request.
1. Tag a Duniter reviewer so they will review your contribution. If you still have no news after several weeks, tag another reviewer or/and talk about your contribution on [the technical forum].
1. Mark the MR as ready (or remove the `Draft: ` prefix) only when you think it can be reviewed or merged.
1. Assign a Duniter reviewer so they will review your contribution. If you still have no news after several weeks, ask explicitly for a review, or tag another reviewer or/and talk about your contribution on [the technical forum].

## List of Duniter's reviewers

- @librelois
- @HugoTrentesaux
- @tuxmain

[commit naming conventions]: ./docs/dev/git-conventions.md#naming-commits
+11691 −6523

File changed.

Preview size limit exceeded, changes collapsed.

+208 −207

File changed.

Preview size limit exceeded, changes collapsed.

+40 −189
@@ -10,32 +10,46 @@
    <img alt="logov2" src="https://duniter.fr/img/duniterv2.svg" width="128" height="128"/>
</div>

## Documentation TOC
## Documentation

- [README](./README.md)
Multiple documentation sources are available depending on the level of detail you need.

- Full technical Rust doc (auto-generated with `cargo xtask gen-doc`): https://doc-duniter-org.ipns.pagu.re/duniter/
- User and client developer doc (official website): https://duniter.org/wiki/duniter-v2/
- Internal documentation (within the git repository), see the table of contents below: [./doc](./doc)

### Internal documentation TOC

- [README](./README.md) (this file)
  - [Use](#use)
  - [Test](#test)
  - [Contribute](#contribute)
  - [Structure](#project-structure)
- [docs](./docs/)
  - [api](./docs/api/)
    - [manual](./docs/api/manual.md)
  - [License](#license)
- [docs](./docs/) internal documentation
  - [api](./docs/api/) API
    - [manual](./docs/api/manual.md) manage account and identities
    - [runtime-calls](./docs/api/runtime-calls.md) the calls you can submit through the RPC API
  - [dev](./docs/dev/)
    - [runtime-errors](./docs/api/runtime-errors.md) the errors you can get submitting a call
    - [runtime-events](./docs/api/runtime-events.md) the events you can get submitting a call
  - [dev](./docs/dev/) developer documentation
    - [beginner-walkthrough](./docs/dev/beginner-walkthrough.md)
    - [git-conventions](./docs/dev/git-conventions.md)
    - [pallet_conventions](./docs/dev/pallet_conventions.md)
    - [launch-a-live-network](./docs/dev/launch-a-live-network.md)
    - [setup](./docs/dev/setup.md)
    - [compilation features](./docs/dev/compilation.md)
    - [verify-runtime-code](./docs/dev/verify-runtime-code.md)
    - [weights-benchmarking](./docs/dev/weights-benchmarking.md)
    - [upgrade-substrate](./docs/dev/upgrade-substrate.md)
  - [test](./docs/test/)
    - [replay-block](./docs/test/replay-block.md)
  - [user](./docs/user/)
  - [user](./docs/user/) user documentation
    - [autocompletion](./docs/user/autocompletion.md)
    - [build-for-arm](./docs/user/build-for-arm.md)
    - [rpc](./docs/user/rpc.md) deploy a permanent ǦDev mirror node
    - [smith](./docs/user/smith.md) deploy a permanent ǦDev validator node
    - [debian installation](./docs/user/installation_debian.md)
    - [distance](./docs/user/distance.md)
    - [fees](./docs/user/fees.md)
  - [packaging](./docs/packaging/) packaging
    - [build-for-arm](./docs/packaging/build-for-arm.md) build for ARM architecture
    - [build-debian](./docs/packaging/build-deb.md) build a native Debian package
- [docker](./docker/) docker-related documentation
- [end2end-tests](./end2end-tests/) automated end to end tests written with cucumber
- [live-tests](./live-tests/) sanity checks to test the storage of a live chain

@@ -45,24 +59,23 @@

The easiest way is to use the docker image.

Minimal command to deploy a **temporary** mirror peer:
Minimal command to deploy a temporary mirror peer:

```docker
docker run -it -p9944:9944 -e DUNITER_CHAIN_NAME=gdev duniter/duniter-v2s:v0.4.0 --tmp --execution=Wasm
docker run -it -p9944:9944 -e DUNITER_CHAIN_NAME=gdev duniter/duniter-v2s-gdev-800:latest
```

To go further, read [How to deploy a permanent mirror node on ĞDev network](./docs/user/rpc.md).
To go further, read [How to deploy a permanent mirror node on ĞDev network 🔗](https://duniter.org/wiki/duniter-v2/#run-a-mirror-node).

### Create your local blockchain

It can be useful to deploy your local blockchain, for instance to have a controlled environement
to develop/test an application that interacts with the blockchain.
It can be useful to deploy your local blockchain, for instance to have a controlled environment to develop/test an application that interacts with the blockchain.

```docker
docker run -it -p9944:9944 duniter/duniter-v2s:v0.4.0 --tmp
docker run -it -p9944:9944 duniter/duniter-v2s-gdev-800:latest
```

Or use the `docker-compose.yml` at the root of this repository.
Or use the [`docker-compose.yml`](./docker-compose.yml) at the root of this repository.

#### Control when your local blockchain should produce blocks

@@ -73,34 +86,9 @@ You can decide when to produce blocks with the cli option `--sealing` which has
- `--sealing=instant`: produce a block immediately upon receiving a transaction into the transaction pool
- `--sealing=manual`: produce a block upon receiving an RPC request (method `engine_createBlock`).
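With `--sealing=manual`, block production is driven over JSON-RPC. A hedged sketch of the request (Substrate's manual-seal `engine_createBlock` takes `[create_empty, finalize, parent_hash]`; the port and a locally running node are assumptions):

```sh
# JSON-RPC payload asking a manual-seal node to author one block.
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"engine_createBlock","params":[true,false,null]}'
echo "$PAYLOAD"
# Against a node started with --sealing=manual (not executed here):
# curl -H 'Content-Type: application/json' -d "$PAYLOAD" http://127.0.0.1:9944
```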

### Autocompletion

See [autocompletion](./docs/user/autocompletion.md).

## Test

### Test a specific commit

At each commit on master, an image with the tag `debug-sha-********` is published, where `********`
corresponds to the first 8 hash characters of the commit.

Usage:

```docker
docker run -it -p9944:9944 --name duniter-v2s duniter/duniter-v2s:debug-sha-b836f1a6
```

Then open `https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944` in a browser.

Enable detailed logging:
### Shell autocompletion

```docker
docker run -it -p9944:9944 --name duniter-v2s \
  -e RUST_LOG=debug \
  -e RUST_BACKTRACE=1 \
  -lruntime=debug \
  duniter/duniter-v2s:debug-sha-b836f1a6
```
See [autocompletion](./docs/user/autocompletion.md) to generate shell autocompletion for duniter commands.

## Contribute

@@ -127,20 +115,11 @@ cargo build
Use Rust's native `cargo` command to build and launch the node:

```sh
cargo run -- --dev --tmp
cargo run -- --dev
```

This will deploy a local blockchain with test accounts (Alice, Bob, etc) in the genesis.

## Single-Node Development Chain

This command will start the single-node development chain with persistent state:

```bash
./target/debug/duniter --dev --tmp
```

Then open `https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944` in a browser.
Open `https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944` to watch and interact with your node.

Start the development chain with detailed logging:

@@ -148,140 +127,11 @@ Start the development chain with detailed logging:
RUST_LOG=debug RUST_BACKTRACE=1 ./target/debug/duniter -lruntime=debug --dev
```

## Multi-Node Local Testnet

If you want to see the multi-node consensus algorithm in action, refer to
[our Start a Private Network tutorial](https://substrate.dev/docs/en/tutorials/start-a-private-network/).

### Purge previous local testnet

```
./target/debug/duniter purge-chain --base-path /tmp/alice --chain local
./target/debug/duniter purge-chain --base-path /tmp/bob --chain local

```

### Start Alice's node

```bash
./target/debug/duniter \
  --base-path /tmp/alice \
  --chain local \
  --alice \
  --port 30333 \
  --ws-port 9945 \
  --rpc-port 9933 \
  --node-key 0000000000000000000000000000000000000000000000000000000000000001 \
  --validator
```

### Start Bob's node
## License

```bash
./target/debug/duniter \
  --base-path /tmp/bob \
  --chain local \
  --bob \
  --port 30334 \
  --ws-port 9946 \
  --rpc-port 9934 \
  --validator \
  --bootnodes /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp
```
See [LICENSE](./LICENSE)

## Project Structure

A Substrate project such as this consists of a number of components that are spread across a few
directories.

### Node

A blockchain node is an application that allows users to participate in a blockchain network.
Substrate-based blockchain nodes expose a number of capabilities:

- Networking: Substrate nodes use the [`libp2p`](https://libp2p.io/) networking stack to allow the
  nodes in the network to communicate with one another.
- Consensus: Blockchains must have a way to come to
  [consensus](https://substrate.dev/docs/en/knowledgebase/advanced/consensus) on the state of the
  network. Substrate makes it possible to supply custom consensus engines and also ships with
  several consensus mechanisms that have been built on top of
  [Web3 Foundation research](https://research.web3.foundation/en/latest/polkadot/NPoS/index.html).
- RPC Server: A remote procedure call (RPC) server is used to interact with Substrate nodes.

There are several files in the `node` directory - take special note of the following:

- [`chain_spec.rs`](./node/src/chain_spec.rs): A
  [chain specification](https://substrate.dev/docs/en/knowledgebase/integrate/chain-spec) is a
  source code file that defines a Substrate chain's initial (genesis) state. Chain specifications
  are useful for development and testing, and critical when architecting the launch of a
  production chain. Take note of the `development_chain_spec` and `testnet_genesis` functions, which
  are used to define the genesis state for the local development chain configuration. These
  functions identify some
  [well-known accounts](https://substrate.dev/docs/en/knowledgebase/integrate/subkey#well-known-keys)
  and use them to configure the blockchain's initial state.
- [`service.rs`](./node/src/service.rs): This file defines the node implementation. Take note of
  the libraries that this file imports and the names of the functions it invokes. In particular,
  there are references to consensus-related topics, such as the
  [longest chain rule](https://substrate.dev/docs/en/knowledgebase/advanced/consensus#longest-chain-rule),
  the [Babe](https://substrate.dev/docs/en/knowledgebase/advanced/consensus#babe) block authoring
  mechanism and the
  [GRANDPA](https://substrate.dev/docs/en/knowledgebase/advanced/consensus#grandpa) finality
  gadget.

After the node has been [built](#build), refer to the embedded documentation to learn more about the
capabilities and configuration parameters that it exposes:

```shell
./target/debug/duniter --help
```

### Runtime

In Substrate, the terms
"[runtime](https://substrate.dev/docs/en/knowledgebase/getting-started/glossary#runtime)" and
"[state transition function](https://substrate.dev/docs/en/knowledgebase/getting-started/glossary#stf-state-transition-function)"
are analogous - they refer to the core logic of the blockchain that is responsible for validating
blocks and executing the state changes they define. The Substrate project in this repository uses
the [FRAME](https://substrate.dev/docs/en/knowledgebase/runtime/frame) framework to construct a
blockchain runtime. FRAME allows runtime developers to declare domain-specific logic in modules
called "pallets". At the heart of FRAME is a helpful
[macro language](https://substrate.dev/docs/en/knowledgebase/runtime/macros) that makes it easy to
create pallets and flexibly compose them to create blockchains that can address
[a variety of needs](https://www.substrate.io/substrate-users/).

Review the [FRAME runtime implementation](./runtime/src/lib.rs) included in this template and note
the following:

- This file configures several pallets to include in the runtime. Each pallet configuration is
  defined by a code block that begins with `impl $PALLET_NAME::Config for Runtime`.
- The pallets are composed into a single runtime by way of the
  [`construct_runtime!`](https://crates.parity.io/frame_support/macro.construct_runtime.html)
  macro, which is part of the core
  [FRAME Support](https://substrate.dev/docs/en/knowledgebase/runtime/frame#support-library)
  library.
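The configuration pattern described above can be sketched in plain Rust. This is a simplified analogy, not the actual FRAME macros: the `pallet_balances` module, the `total` function, and the trait bounds here are illustrative stand-ins for what the macros generate.

```rust
// A toy model of the FRAME configuration pattern: each "pallet" exposes a
// Config trait, and a single Runtime type implements every pallet's Config.
// Illustrative analogy only; real pallets are built with the FRAME macros.

mod pallet_balances {
    // The pallet declares the types it depends on...
    pub trait Config {
        type Balance: Copy + core::ops::Add<Output = Self::Balance>;
    }

    // ...and writes its logic generically against that interface.
    pub fn total<T: Config>(a: T::Balance, b: T::Balance) -> T::Balance {
        a + b
    }
}

// The runtime composes pallets by implementing each pallet's Config trait,
// mirroring the `impl $PALLET_NAME::Config for Runtime` blocks in lib.rs.
struct Runtime;

impl pallet_balances::Config for Runtime {
    type Balance = u64;
}

fn main() {
    // The pallet's generic logic is instantiated for this runtime's types.
    let sum = pallet_balances::total::<Runtime>(2, 40);
    println!("{sum}");
}
```

The key idea is that a pallet never names concrete types; the runtime supplies them all in one place, which is what `construct_runtime!` then stitches together.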

### Pallets

The runtime in this project is constructed using many FRAME pallets that ship with the
[core Substrate repository](https://github.com/paritytech/substrate/tree/master/frame) and a
template pallet that is [defined in the `pallets`](./pallets/template/src/lib.rs) directory.

A FRAME pallet is composed of a number of blockchain primitives:

- Storage: FRAME defines a rich set of powerful
  [storage abstractions](https://substrate.dev/docs/en/knowledgebase/runtime/storage) that make
  it easy to use Substrate's efficient key-value database to manage the evolving state of a
  blockchain.
- Dispatchables: FRAME pallets define special types of functions that can be invoked (dispatched)
  from outside of the runtime in order to update its state.
- Events: Substrate uses [events](https://substrate.dev/docs/en/knowledgebase/runtime/events) to
  notify users of important changes in the runtime.
- Errors: When a dispatchable fails, it returns an error.
- Config: The `Config` configuration interface is used to define the types and parameters upon
  which a FRAME pallet depends.
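These primitives can be sketched together in a few lines of plain Rust. Again an analogy rather than FRAME code: the `Event` and `Error` enums and the in-memory `HashMap` standing in for a `StorageMap` are hypothetical.

```rust
use std::collections::HashMap;

// Toy stand-ins for the primitives listed above; illustrative only.
#[derive(Debug, PartialEq)]
enum Event {
    ValueStored { key: u32, value: u32 },
}

#[derive(Debug, PartialEq)]
enum Error {
    ValueTooLarge,
}

// A "dispatchable": invoked from outside, it updates storage and either
// emits an event on success or returns an error on failure.
fn store_value(
    storage: &mut HashMap<u32, u32>, // stand-in for a pallet StorageMap
    key: u32,
    value: u32,
) -> Result<Event, Error> {
    if value > 1_000 {
        return Err(Error::ValueTooLarge);
    }
    storage.insert(key, value);
    Ok(Event::ValueStored { key, value })
}

fn main() {
    let mut storage = HashMap::new();
    let ok = store_value(&mut storage, 1, 42);
    println!("{ok:?}");
    let err = store_value(&mut storage, 2, 9_999);
    println!("{err:?}");
}
```

In a real pallet the same shape appears with FRAME types: dispatchables return a `DispatchResult`, events are deposited rather than returned, and storage lives in the chain's key-value database instead of a local map.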

## License

CopyLeft 2021-2023 Axiom-Team

Some parts borrowed from Polkadot (Parity Technologies (UK) Ltd.)
Duniter-v2S is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without
even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with Duniter-v2S. If not, see <https://www.gnu.org/licenses/>.
```
+42 −0
[package]
edition.workspace = true
homepage.workspace = true
license.workspace = true
description = "Duniter client distance"
name = "dc-distance"
readme = "README.md"
version = "1.0.0"
repository.workspace = true

[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

[features]
std = [
	"frame-support/std",
	"pallet-distance/std",
	"sp-core/std",
	"sp-distance/std",
	"sp-runtime/std",
]
runtime-benchmarks = [
	"frame-support/runtime-benchmarks",
	"pallet-distance/runtime-benchmarks",
	"sp-runtime/runtime-benchmarks",
]
try-runtime = [
	"frame-support/try-runtime",
	"pallet-distance/try-runtime",
	"sp-distance/try-runtime",
	"sp-runtime/try-runtime",
]

[dependencies]
frame-support = { workspace = true }
log = { workspace = true }
pallet-distance = { workspace = true }
sc-client-api = { workspace = true }
sp-core = { workspace = true }
sp-distance = { workspace = true }
sp-runtime = { workspace = true }
thiserror = { workspace = true }
+3 −0
# Distance Oracle Inherent Data Provider

You can find the autogenerated documentation at: [https://doc-duniter-org.ipns.pagu.re/dc_distance/index.html](https://doc-duniter-org.ipns.pagu.re/dc_distance/index.html).
+189 −0
// Copyright 2022 Axiom-Team
//
// This file is part of Substrate-Libre-Currency.
//
// Substrate-Libre-Currency is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, version 3 of the License.
//
// Substrate-Libre-Currency is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with Substrate-Libre-Currency. If not, see <https://www.gnu.org/licenses/>.

//! # Distance Oracle Inherent Data Provider
//!
//! This crate provides functionality for creating an **inherent data provider**
//! specifically designed for the "Distance Oracle".
//! The inherent data provider is responsible for fetching and delivering
//! computation results required for the runtime to process distance evaluations.
//!
//! ## Relationship with Distance Oracle
//!
//! The **distance-oracle** is responsible for computing distance evaluations,
//! storing the results to be read in the next period, and saving them to files.
//! These files are then read by **this inherent data provider**
//! to provide the required data to the runtime.
//!
//! ## Overview
//!
//! - Retrieves **period index** and **evaluation results** from the storage and file system.
//! - Determines whether the computation results for the current period have already been published.
//! - Reads and parses evaluation result files when available, providing the necessary data to the runtime.

use frame_support::pallet_prelude::*;
use sc_client_api::{ProvideUncles, StorageKey, StorageProvider};
use sp_runtime::{AccountId32, generic::BlockId, traits::Block as BlockT};
use std::path::PathBuf;

/// The file version that should match the distance oracle one.
/// This ensures that the smith avoids accidentally submitting invalid data
/// in case there are changes in logic between the runtime and the oracle,
/// thereby preventing potential penalties.
const VERSION_PREFIX: &str = "001-";

type IdtyIndex = u32;

#[derive(Debug, thiserror::Error)]
pub enum Error<B: BlockT> {
    #[error("Could not retrieve the block hash for block id: {0:?}")]
    NoHashForBlockId(BlockId<B>),
}

/// Create a new [`sp_distance::InherentDataProvider`] at the given block.
pub fn create_distance_inherent_data_provider<B, C, Backend>(
    client: &C,
    parent: B::Hash,
    distance_dir: PathBuf,
    owner_keys: &[sp_core::sr25519::Public],
) -> sp_distance::InherentDataProvider<IdtyIndex>
where
    B: BlockT,
    C: ProvideUncles<B> + StorageProvider<B, Backend>,
    Backend: sc_client_api::Backend<B>,
    IdtyIndex: Decode + Encode + PartialEq + TypeInfo,
{
    // Retrieve the period_index from storage.
    let period_index = client
        .storage(
            parent,
            &StorageKey(
                frame_support::storage::storage_prefix(b"Distance", b"CurrentPeriodIndex").to_vec(),
            ),
        )
        .ok()
        .flatten()
        .and_then(|raw| u32::decode(&mut &raw.0[..]).ok());

    // Return early if the storage is inaccessible or the data is corrupted.
    let period_index = match period_index {
        Some(index) => index,
        None => {
            log::error!("🧙 [distance inherent] PeriodIndex decoding failed.");
            return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
        }
    };

    // Retrieve the published_results from storage.
    let published_results = client
        .storage(
            parent,
            &StorageKey(
                frame_support::storage::storage_prefix(
                    b"Distance",
                    match period_index % 3 {
                        0 => b"EvaluationPool0",
                        1 => b"EvaluationPool1",
                        2 => b"EvaluationPool2",
                        _ => unreachable!("n<3"),
                    },
                )
                .to_vec(),
            ),
        )
        .ok()
        .flatten()
        .and_then(|raw| {
            pallet_distance::EvaluationPool::<AccountId32, IdtyIndex>::decode(&mut &raw.0[..]).ok()
        });

    // Return early if the storage is inaccessible or the data is corrupted.
    let published_results = match published_results {
        Some(published_results) => published_results,
        None => {
            log::info!("🧙 [distance inherent] No published result at this block.");
            return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
        }
    };

    // Find the account associated with the BABE key that is in our owner keys.
    let mut local_account = None;
    for key in owner_keys {
        // Session::KeyOwner is StorageMap<_, Twox64Concat, (KeyTypeId, Vec<u8>), AccountId32, OptionQuery>
        // Slices (variable length) and array (fixed length) are encoded differently, so the `.as_slice()` is needed
        let item_key = (sp_runtime::KeyTypeId(*b"babe"), key.0.as_slice()).encode();
        let mut storage_key =
            frame_support::storage::storage_prefix(b"Session", b"KeyOwner").to_vec();
        storage_key.extend_from_slice(&sp_core::twox_64(&item_key));
        storage_key.extend_from_slice(&item_key);

        if let Some(raw_data) = client
            .storage(parent, &StorageKey(storage_key))
            .ok()
            .flatten()
        {
            if let Ok(key_owner) = AccountId32::decode(&mut &raw_data.0[..]) {
                local_account = Some(key_owner);
                break;
            } else {
                log::warn!("🧙 [distance inherent] Cannot decode key owner value");
            }
        }
    }

    // Have we already published a result for this period?
    if let Some(local_account) = local_account {
        if published_results.evaluators.contains(&local_account) {
            log::debug!("🧙 [distance inherent] Already published a result for this period");
            return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
        }
    } else {
        log::error!("🧙 [distance inherent] Cannot find our BABE owner key");
        return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
    }

    // Read evaluation result from file, if it exists
    log::debug!(
        "🧙 [distance inherent] Reading evaluation result from file {:?}",
        distance_dir.join(VERSION_PREFIX.to_owned() + &period_index.to_string())
    );
    let evaluation_result = match std::fs::read(
        distance_dir.join(VERSION_PREFIX.to_owned() + &period_index.to_string()),
    ) {
        Ok(data) => data,
        Err(e) => {
            match e.kind() {
                std::io::ErrorKind::NotFound => {
                    log::debug!(
                        "🧙 [distance inherent] Evaluation result file not found. Please ensure that the oracle version matches {}",
                        VERSION_PREFIX
                    );
                }
                _ => {
                    log::error!(
                        "🧙 [distance inherent] Cannot read distance evaluation result file: {e:?}"
                    );
                }
            }
            return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
        }
    };

    log::info!("🧙 [distance inherent] Providing evaluation result");
    sp_distance::InherentDataProvider::<IdtyIndex>::new(Some(
        sp_distance::ComputationResult::decode(&mut evaluation_result.as_slice()).unwrap(),
    ))
}
+52 −0
[package]
name = "distance-oracle"
version = "0.1.0"
authors.workspace = true
repository.workspace = true
license.workspace = true
edition.workspace = true

[[bin]]
name = "distance-oracle"
required-features = ["standalone"]

[features]
default = ["gdev", "standalone", "std"]
gdev = []
gtest = []
g1 = []
# Feature standalone is for CLI executable
standalone = ["clap", "tokio"]
# Feature std is needed
std = [
	"codec/std",
	"fnv/std",
	"sp-core/std",
	"sp-distance/std",
	"sp-runtime/std",
]
try-runtime = ["sp-distance/try-runtime", "sp-runtime/try-runtime"]

[dependencies]
clap = { workspace = true, features = ["derive"], optional = true }
codec = { workspace = true }
fnv = { workspace = true }
log = { workspace = true }
rayon = { workspace = true }
simple_logger = { workspace = true }
sp-core = { workspace = true }
sp-distance = { workspace = true }
sp-runtime = { workspace = true }
subxt = { workspace = true, features = [
	"native",
	"jsonrpsee",
] }
tokio = { workspace = true, features = [
	"rt-multi-thread",
	"macros",
], optional = true }

[dev-dependencies]
bincode = { workspace = true }
dubp-wot = { workspace = true }
flate2 = { workspace = true, features = ["zlib-ng-compat"] }
+3 −0
# Distance Oracle

You can find the autogenerated documentation at: [https://doc-duniter-org.ipns.pagu.re/distance_oracle/index.html](https://doc-duniter-org.ipns.pagu.re/distance_oracle/index.html).
+176 −0
// Copyright 2023 Axiom-Team
//
// This file is part of Duniter-v2S.
//
// Duniter-v2S is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, version 3 of the License.
//
// Duniter-v2S is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with Duniter-v2S. If not, see <https://www.gnu.org/licenses/>.

#![allow(clippy::type_complexity)]

use crate::runtime;
use log::debug;

pub type Client = subxt::OnlineClient<crate::RuntimeConfig>;
pub type AccountId = subxt::utils::AccountId32;
pub type IdtyIndex = u32;
pub type EvaluationPool =
    runtime::runtime_types::pallet_distance::types::EvaluationPool<AccountId, IdtyIndex>;
pub type H256 = subxt::utils::H256;

pub async fn client(rpc_url: impl AsRef<str>) -> Client {
    Client::from_insecure_url(rpc_url)
        .await
        .expect("Cannot create RPC client")
}

pub async fn parent_hash(client: &Client) -> H256 {
    client
        .blocks()
        .at_latest()
        .await
        .expect("Cannot fetch latest block hash")
        .hash()
}

pub async fn current_period_index(client: &Client, parent_hash: H256) -> u32 {
    client
        .storage()
        .at(parent_hash)
        .fetch(&runtime::storage().distance().current_period_index())
        .await
        .expect("Cannot fetch current pool index")
        .unwrap_or_default()
}

pub async fn current_pool(
    client: &Client,
    parent_hash: H256,
    current_pool_index: u32,
) -> Option<EvaluationPool> {
    client
        .storage()
        .at(parent_hash)
        .fetch(&match current_pool_index {
            0 => {
                debug!("Looking at Pool1 for pool index {}", current_pool_index);
                runtime::storage().distance().evaluation_pool1()
            }
            1 => {
                debug!("Looking at Pool2 for pool index {}", current_pool_index);
                runtime::storage().distance().evaluation_pool2()
            }
            2 => {
                debug!("Looking at Pool0 for pool index {}", current_pool_index);
                runtime::storage().distance().evaluation_pool0()
            }
            _ => unreachable!("n<3"),
        })
        .await
        .expect("Cannot fetch current pool")
}

pub async fn evaluation_block(client: &Client, parent_hash: H256) -> H256 {
    client
        .storage()
        .at(parent_hash)
        .fetch(&runtime::storage().distance().evaluation_block())
        .await
        .expect("Cannot fetch evaluation block")
        .expect("No evaluation block")
}

pub async fn max_referee_distance(client: &Client) -> u32 {
    client
        .constants()
        .at(&runtime::constants().distance().max_referee_distance())
        .expect("Cannot fetch referee distance")
}

pub async fn member_iter(client: &Client, evaluation_block: H256) -> MemberIter {
    MemberIter(
        client
            .storage()
            .at(evaluation_block)
            .iter(runtime::storage().membership().membership_iter())
            .await
            .expect("Cannot fetch memberships"),
    )
}

pub struct MemberIter(
    subxt::backend::StreamOfResults<
        subxt::storage::StorageKeyValuePair<
            subxt::storage::StaticAddress<
                (),
                runtime::runtime_types::sp_membership::MembershipData<u32>,
                (),
                (),
                subxt::utils::Yes,
            >,
        >,
    >,
);

impl MemberIter {
    pub async fn next(&mut self) -> Result<Option<IdtyIndex>, subxt::error::Error> {
        self.0
            .next()
            .await
            .transpose()
            .map(|i| i.map(|j| idty_id_from_storage_key(&j.key_bytes)))
    }
}

pub async fn cert_iter(client: &Client, evaluation_block: H256) -> CertIter {
    CertIter(
        client
            .storage()
            .at(evaluation_block)
            .iter(runtime::storage().certification().certs_by_receiver_iter())
            .await
            .expect("Cannot fetch certifications"),
    )
}

pub struct CertIter(
    subxt::backend::StreamOfResults<
        subxt::storage::StorageKeyValuePair<
            subxt::storage::StaticAddress<
                (),
                Vec<(u32, u32)>,
                (),
                subxt::utils::Yes,
                subxt::utils::Yes,
            >,
        >,
    >,
);

impl CertIter {
    pub async fn next(
        &mut self,
    ) -> Result<Option<(IdtyIndex, Vec<(IdtyIndex, u32)>)>, subxt::error::Error> {
        self.0
            .next()
            .await
            .transpose()
            .map(|i| i.map(|j| (idty_id_from_storage_key(&j.key_bytes), j.value)))
    }
}

fn idty_id_from_storage_key(storage_key: &[u8]) -> IdtyIndex {
    u32::from_le_bytes(
        storage_key[40..44]
            .try_into()
            .expect("Cannot convert StorageKey to IdtyIndex"),
    )
}
+422 −0
// Copyright 2023 Axiom-Team
//
// This file is part of Duniter-v2S.
//
// Duniter-v2S is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, version 3 of the License.
//
// Duniter-v2S is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with Duniter-v2S. If not, see <https://www.gnu.org/licenses/>.

//! # Distance Oracle
//!
//! The **Distance Oracle** is a standalone program designed to calculate the distances between identities in the Duniter Web of Trust (WoT). This process is computationally intensive and is therefore decoupled from the main runtime. It allows smith users to choose whether to run the oracle and provide results to the network.
//!
//! The **oracle** works in conjunction with the **Inherent Data Provider** and the **Distance Pallet** in the runtime to deliver periodic computation results. The **Inherent Data Provider** fetches and supplies these results to the runtime, ensuring that the necessary data for distance evaluations is available to be processed at the appropriate time in the runtime lifecycle.
//!
//! ## Structure
//!
//! The Distance Oracle is organized into the following modules:
//!
//! 1. **`/distance-oracle/`**: Contains the main binary for executing the distance computation.
//! 2. **`/primitives/distance/`**: Defines primitive types shared between the client and runtime.
//! 3. **`/client/distance/`**: Exposes the `create_distance_inherent_data_provider`, which feeds data into the runtime through the Inherent Data Provider.
//! 4. **`/pallets/distance/`**: A pallet that handles distance-related types, traits, storage, and hooks in the runtime, coordinating the interaction between the oracle, inherent data provider, and runtime.
//!
//! ## How it works
//! - The **Distance Pallet** adds an evaluation request at period `i` in the runtime.
//! - The **Distance Oracle** evaluates this request at period `i + 1`, computes the necessary results and stores them on disk.
//! - The **Inherent Data Provider** reads this evaluation result from disk at period `i + 2` and provides it to the runtime to perform the required operations.
//!
//! ## Usage
//!
//! ### Docker Integration
//!
//! To run the Distance Oracle, use the provided Docker setup. Refer to the [docker-compose.yml](../docker-compose.yml) file for an example configuration.
//!
//! Example Output:
//! ```text
//! 2023-12-09T14:45:05.942Z INFO [distance_oracle] Nothing to do: Pool does not exist
//! Waiting 1800 seconds before next execution...
//! ```

#[cfg(not(test))]
pub mod api;
#[cfg(test)]
pub mod mock;
#[cfg(test)]
mod tests;

#[cfg(test)]
pub use mock as api;

use api::{AccountId, EvaluationPool, H256, IdtyIndex};

use codec::Encode;
use fnv::{FnvHashMap, FnvHashSet};
use log::{debug, info, warn};
use rayon::iter::{IntoParallelRefIterator, ParallelIterator};
use std::{io::Write, path::PathBuf};

/// The file version must match the version used by the inherent data provider.
/// This ensures that the smith avoids accidentally submitting invalid data
/// in case there are changes in logic between the runtime and the oracle,
/// thereby preventing potential penalties.
const VERSION_PREFIX: &str = "001-";

#[cfg(feature = "gdev")]
#[subxt::subxt(runtime_metadata_path = "../resources/gdev_metadata.scale")]
pub mod runtime {}
#[cfg(feature = "gtest")]
#[subxt::subxt(runtime_metadata_path = "../resources/gtest_metadata.scale")]
pub mod runtime {}
#[cfg(feature = "g1")]
#[subxt::subxt(runtime_metadata_path = "../resources/g1_metadata.scale")]
pub mod runtime {}

pub enum RuntimeConfig {}
impl subxt::config::Config for RuntimeConfig {
    type AccountId = AccountId;
    type Address = sp_runtime::MultiAddress<Self::AccountId, u32>;
    type AssetId = ();
    type ExtrinsicParams = subxt::config::substrate::SubstrateExtrinsicParams<Self>;
    type Hasher = subxt::config::substrate::BlakeTwo256;
    type Header =
        subxt::config::substrate::SubstrateHeader<u32, subxt::config::substrate::BlakeTwo256>;
    type Signature = sp_runtime::MultiSignature;
}

/// Represents a tipping amount.
#[derive(Copy, Clone, Debug, Default, Encode)]
pub struct Tip {
    #[codec(compact)]
    tip: u64,
}

impl Tip {
    pub fn new(amount: u64) -> Self {
        Tip { tip: amount }
    }
}

impl From<u64> for Tip {
    fn from(n: u64) -> Self {
        Self::new(n)
    }
}

/// Represents configuration parameters.
pub struct Settings {
    pub evaluation_result_dir: PathBuf,
    pub rpc_url: String,
}

impl Default for Settings {
    fn default() -> Self {
        Self {
            evaluation_result_dir: PathBuf::from("/tmp/duniter/chains/gdev/distance"),
            rpc_url: String::from("ws://127.0.0.1:9944"),
        }
    }
}

/// Runs the evaluation process, saves the results, and cleans up old files.
///
/// This function performs the following steps:
/// 1. Runs the evaluation task by invoking `compute_distance_evaluation`, which provides:
///    - The evaluation results.
///    - The current period index.
///    - The file path where the results should be stored.
/// 2. Saves the evaluation results to a file in the specified directory.
/// 3. Cleans up outdated evaluation files.
pub async fn run(client: &api::Client, settings: &Settings) {
    let Some((evaluation, current_period_index, evaluation_result_path)) =
        compute_distance_evaluation(client, settings).await
    else {
        return;
    };

    debug!("Saving distance evaluation result to file `{evaluation_result_path:?}`");
    let mut evaluation_result_file = std::fs::OpenOptions::new()
        .write(true)
        .create_new(true)
        .open(&evaluation_result_path)
        .unwrap_or_else(|e| {
            panic!(
                "Cannot open distance evaluation result file `{evaluation_result_path:?}`: {e:?}"
            )
        });
    evaluation_result_file
        .write_all(
            &sp_distance::ComputationResult {
                distances: evaluation,
            }
            .encode(),
        )
        .unwrap_or_else(|e| {
            panic!(
                "Cannot write distance evaluation result to file `{evaluation_result_path:?}`: {e:?}"
            )
        });

    // When a new result is written, remove old results except for the current period used by the inherent logic and the next period that was just generated.
    settings
        .evaluation_result_dir
        .read_dir()
        .unwrap_or_else(|e| {
            panic!(
                "Cannot read distance evaluation result directory `{:?}`: {:?}",
                settings.evaluation_result_dir, e
            )
        })
        .flatten()
        .filter_map(|entry| {
            entry
                .file_name()
                .to_str()
                .and_then(|name| {
                    name.split('-')
                        .next_back()?
                        .parse::<u32>()
                        .ok()
                        .filter(|&pool| {
                            pool != current_period_index && pool != current_period_index + 1
                        })
                })
                .map(|_| entry.path())
        })
        .for_each(|path| {
            std::fs::remove_file(&path)
                .unwrap_or_else(|e| warn!("Cannot remove file `{:?}`: {:?}", path, e));
        });
}

/// Evaluates distance for the current period and prepares results for storage.
///
/// This function performs the following steps:
/// 1. Prepares the evaluation context using `prepare_evaluation_context`. If the context is not
///    ready (e.g., no pending evaluations, or results already exist), it returns `None`.
/// 2. Evaluates distances for all identities in the evaluation pool.
/// 3. Returns the evaluation results, the current period index, and the path to store the results.
///
pub async fn compute_distance_evaluation(
    client: &api::Client,
    settings: &Settings,
) -> Option<(Vec<sp_runtime::Perbill>, u32, PathBuf)> {
    let (evaluation_block, current_period_index, evaluation_pool, evaluation_result_path) =
        prepare_evaluation_context(client, settings).await?;

    info!("Evaluating distance for period {}", current_period_index);

    let max_depth = api::max_referee_distance(client).await;

    // member idty -> issued certs
    let mut members = FnvHashMap::<IdtyIndex, u32>::default();

    let mut members_iter = api::member_iter(client, evaluation_block).await;
    while let Some(member_idty) = members_iter
        .next()
        .await
        .expect("Cannot fetch next members")
    {
        members.insert(member_idty, 0);
    }

    let min_certs_for_referee = (members.len() as f32).powf(1. / (max_depth as f32)).ceil() as u32;

    // idty -> received certs
    let mut received_certs = FnvHashMap::<IdtyIndex, Vec<IdtyIndex>>::default();

    let mut certs_iter = api::cert_iter(client, evaluation_block).await;
    while let Some((receiver, issuers)) = certs_iter
        .next()
        .await
        .expect("Cannot fetch next certification")
    {
        if (issuers.len() as u32) < min_certs_for_referee {
            // This member is not referee
            members.remove(&receiver);
        }
        for (issuer, _removable_on) in issuers.iter() {
            if let Some(issued_certs) = members.get_mut(issuer) {
                *issued_certs += 1;
            }
        }
        received_certs.insert(
            receiver,
            issuers
                .into_iter()
                .map(|(issuer, _removable_on)| issuer)
                .collect(),
        );
    }

    // Only retain referees
    members.retain(|_idty, issued_certs| *issued_certs >= min_certs_for_referee);
    let referees = members;

    let evaluation = evaluation_pool
        .evaluations
        .0
        .as_slice()
        .par_iter()
        .map(|(idty, _)| distance_rule(&received_certs, &referees, max_depth, *idty))
        .collect();

    Some((evaluation, current_period_index, evaluation_result_path))
}

/// Prepares the context for the next evaluation task.
///
/// This function performs the following steps:
/// 1. Fetches the parent hash of the latest block from the API.
/// 2. Determines the current period index.
/// 3. Retrieves the evaluation pool for the current period.
///    - If the pool does not exist or is empty, it returns `None`.
/// 4. Checks if the evaluation result file for the next period already exists.
///    - If it exists, the task has already been completed, so the function returns `None`.
/// 5. Ensures the evaluation result directory is available, creating it if necessary.
/// 6. Retrieves the block number of the evaluation.
///
async fn prepare_evaluation_context(
    client: &api::Client,
    settings: &Settings,
) -> Option<(H256, u32, EvaluationPool, PathBuf)> {
    let parent_hash = api::parent_hash(client).await;

    let current_period_index = api::current_period_index(client, parent_hash).await;

    // Fetch the pending identities
    let Some(evaluation_pool) =
        api::current_pool(client, parent_hash, current_period_index % 3).await
    else {
        info!("Nothing to do: Pool does not exist");
        return None;
    };

    // Stop if nothing to evaluate
    if evaluation_pool.evaluations.0.is_empty() {
        info!("Nothing to do: Pool is empty");
        return None;
    }

    // The result is saved in a file named `current_period_index + 1`.
    // It will be picked up during the next period by the inherent.
    let evaluation_result_path = settings
        .evaluation_result_dir
        .join(VERSION_PREFIX.to_owned() + &(current_period_index + 1).to_string());

    // Stop if already evaluated
    if evaluation_result_path
        .try_exists()
        .expect("Result path unavailable")
    {
        info!("Nothing to do: File already exists");
        return None;
    }

    #[cfg(not(test))]
    std::fs::create_dir_all(&settings.evaluation_result_dir).unwrap_or_else(|e| {
        panic!(
            "Cannot create distance evaluation result directory `{0:?}`: {e:?}",
            settings.evaluation_result_dir
        );
    });

    Some((
        api::evaluation_block(client, parent_hash).await,
        current_period_index,
        evaluation_pool,
        evaluation_result_path,
    ))
}

/// Recursively explores the certification graph to identify referees accessible within a given depth.
fn distance_rule_recursive(
    received_certs: &FnvHashMap<IdtyIndex, Vec<IdtyIndex>>,
    referees: &FnvHashMap<IdtyIndex, u32>,
    idty: IdtyIndex,
    accessible_referees: &mut FnvHashSet<IdtyIndex>,
    known_idties: &mut FnvHashMap<IdtyIndex, u32>,
    depth: u32,
) {
    // Do not re-explore identities that have already been explored at least as deeply
    match known_idties.entry(idty) {
        std::collections::hash_map::Entry::Occupied(mut entry) => {
            if *entry.get() >= depth {
                return;
            } else {
                *entry.get_mut() = depth;
            }
        }
        std::collections::hash_map::Entry::Vacant(entry) => {
            entry.insert(depth);
        }
    }

    // If referee, add it to the list
    if referees.contains_key(&idty) {
        accessible_referees.insert(idty);
    }

    // If reached the maximum distance, stop exploring
    if depth == 0 {
        return;
    }

    // Explore certifiers
    for &certifier in received_certs.get(&idty).unwrap_or(&vec![]).iter() {
        distance_rule_recursive(
            received_certs,
            referees,
            certifier,
            accessible_referees,
            known_idties,
            depth - 1,
        );
    }
}
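The exploration above can be illustrated with a self-contained sketch using plain `std` collections instead of the Fnv ones. The `known` map memoizes, per identity, the largest remaining depth at which it was visited, so a node is re-explored only when reached with a bigger remaining budget (i.e. by a shorter path from the evaluated identity).

```rust
use std::collections::{HashMap, HashSet};

// Depth-limited walk over *incoming* certifications, mirroring the idea of
// `distance_rule_recursive`: record every referee reachable within `depth`
// steps backwards along certifications.
fn explore(
    received_certs: &HashMap<u32, Vec<u32>>,
    referees: &HashSet<u32>,
    idty: u32,
    accessible: &mut HashSet<u32>,
    known: &mut HashMap<u32, u32>,
    depth: u32,
) {
    match known.get(&idty) {
        // Already explored at least as deeply: nothing new to find.
        Some(&d) if d >= depth => return,
        _ => {
            known.insert(idty, depth);
        }
    }
    if referees.contains(&idty) {
        accessible.insert(idty);
    }
    if depth == 0 {
        return; // maximum distance reached
    }
    for &certifier in received_certs.get(&idty).into_iter().flatten() {
        explore(received_certs, referees, certifier, accessible, known, depth - 1);
    }
}

fn main() {
    // Toy graph: 1 and 2 certified identity 0; 3 certified identity 1.
    let mut certs = HashMap::new();
    certs.insert(0u32, vec![1u32, 2]);
    certs.insert(1, vec![3]);
    let referees: HashSet<u32> = [1, 2, 3].into_iter().collect();
    let (mut accessible, mut known) = (HashSet::new(), HashMap::new());
    explore(&certs, &referees, 0, &mut accessible, &mut known, 2);
    println!("{} referees accessible within depth 2", accessible.len()); // 3
}
```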

/// Calculates the fraction of accessible referees to total referees for a given identity.
fn distance_rule(
    received_certs: &FnvHashMap<IdtyIndex, Vec<IdtyIndex>>,
    referees: &FnvHashMap<IdtyIndex, u32>,
    depth: u32,
    idty: IdtyIndex,
) -> sp_runtime::Perbill {
    debug!("Evaluating distance for idty {}", idty);
    let mut accessible_referees =
        FnvHashSet::<IdtyIndex>::with_capacity_and_hasher(referees.len(), Default::default());
    let mut known_idties =
        FnvHashMap::<IdtyIndex, u32>::with_capacity_and_hasher(referees.len(), Default::default());
    distance_rule_recursive(
        received_certs,
        referees,
        idty,
        &mut accessible_referees,
        &mut known_idties,
        depth,
    );
    let result = if referees.contains_key(&idty) {
        sp_runtime::Perbill::from_rational(
            accessible_referees.len() as u32 - 1,
            referees.len() as u32 - 1,
        )
    } else {
        sp_runtime::Perbill::from_rational(accessible_referees.len() as u32, referees.len() as u32)
    };
    info!(
        "Distance for idty {}: {}/{} = {}%",
        idty,
        accessible_referees.len(),
        referees.len(),
        result.deconstruct() as f32 / 1_000_000_000f32 * 100f32
    );
    result
}
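Note the self-exclusion in the final ratio: a referee always reaches itself, so when the evaluated identity is itself a referee it is dropped from both numerator and denominator. A minimal integer sketch of this rule, with parts-per-billion arithmetic standing in for `sp_runtime::Perbill` (an assumption of this sketch):

```rust
// Parts-per-billion ratio mirroring the self-exclusion in `distance_rule`.
fn distance_ratio_ppb(accessible: u32, total_referees: u32, idty_is_referee: bool) -> u64 {
    let (num, den) = if idty_is_referee {
        // The identity trivially reaches itself: drop it from both sides.
        (accessible - 1, total_referees - 1)
    } else {
        (accessible, total_referees)
    };
    if den == 0 {
        0
    } else {
        num as u64 * 1_000_000_000 / den as u64
    }
}

fn main() {
    // A referee reaching 4 of 5 referees (itself included): (4-1)/(5-1) = 75%.
    println!("{}", distance_ratio_ppb(4, 5, true)); // 750000000
    // A non-referee reaching the same 4 of 5: 4/5 = 80%.
    println!("{}", distance_ratio_ppb(4, 5, false)); // 800000000
}
```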
+60 −0
// Copyright 2023-2024 Axiom-Team
//
// This file is part of Duniter-v2S.
//
// Duniter-v2S is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, version 3 of the License.
//
// Duniter-v2S is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with Duniter-v2S. If not, see <https://www.gnu.org/licenses/>.

use clap::Parser;

#[derive(Debug, clap::Parser)]
struct Cli {
    #[clap(short = 'd', long, default_value = "/tmp/duniter/chains/gdev/distance")]
    evaluation_result_dir: String,
    /// Number of seconds between two evaluations (oneshot if absent)
    #[clap(short = 'i', long)]
    interval: Option<u64>,
    /// Node used for fetching state
    #[clap(short = 'u', long, default_value = "ws://127.0.0.1:9944")]
    rpc_url: String,
    /// Log level (off, error, warn, info, debug, trace)
    #[clap(short = 'l', long, default_value = "info")]
    log: log::LevelFilter,
}

#[tokio::main]
async fn main() {
    let cli = Cli::parse();

    simple_logger::SimpleLogger::new()
        .with_level(cli.log)
        .init()
        .unwrap();

    let client = distance_oracle::api::client(&cli.rpc_url).await;

    let settings = distance_oracle::Settings {
        evaluation_result_dir: cli.evaluation_result_dir.into(),
        rpc_url: cli.rpc_url,
    };

    if let Some(duration) = cli.interval {
        let mut interval = tokio::time::interval(std::time::Duration::from_secs(duration));
        interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
        loop {
            distance_oracle::run(&client, &settings).await;
            interval.tick().await;
        }
    } else {
        distance_oracle::run(&client, &settings).await;
    }
}
+128 −0
// Copyright 2023 Axiom-Team
//
// This file is part of Duniter-v2S.
//
// Duniter-v2S is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, version 3 of the License.
//
// Duniter-v2S is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with Duniter-v2S. If not, see <https://www.gnu.org/licenses/>.

use crate::runtime::runtime_types::{
    pallet_distance::median::MedianAcc, sp_arithmetic::per_things::Perbill,
};

use dubp_wot::{WebOfTrust, WotId, data::rusty::RustyWebOfTrust};
use std::collections::BTreeSet;

pub struct Client {
    wot: RustyWebOfTrust,
    pub pool_len: usize,
}
pub type AccountId = sp_runtime::AccountId32;
pub type IdtyIndex = u32;
pub type H256 = subxt::utils::H256;

pub struct EvaluationPool {
    pub evaluations: (Vec<(IdtyIndex, MedianAcc<Perbill>)>,),
    pub evaluators: BTreeSet<AccountId>,
}

pub async fn client(_rpc_url: impl AsRef<str>) -> Client {
    unimplemented!()
}

pub fn client_from_wot(wot: RustyWebOfTrust) -> Client {
    Client { wot, pool_len: 1 }
}

pub async fn parent_hash(_client: &Client) -> H256 {
    Default::default()
}

pub async fn current_period_index(_client: &Client, _parent_hash: H256) -> u32 {
    0
}

pub async fn current_pool(
    client: &Client,
    _parent_hash: H256,
    _current_session: u32,
) -> Option<EvaluationPool> {
    Some(EvaluationPool {
        evaluations: (client
            .wot
            .get_enabled()
            .into_iter()
            .chain(client.wot.get_disabled().into_iter())
            .zip(0..client.pool_len)
            .map(|(wot_id, _)| {
                // Build an empty `MedianAcc` by transmuting its raw layout,
                // since the subxt-generated type exposes no public constructor.
                (wot_id.0 as IdtyIndex, unsafe {
                    std::mem::transmute::<
                        (std::vec::Vec<()>, std::option::Option<u32>, i32),
                        MedianAcc<Perbill>,
                    >((Vec::<()>::new(), Option::<u32>::None, 0))
                })
            })
            .collect(),),
        evaluators: BTreeSet::new(),
    })
}

pub async fn evaluation_block(_client: &Client, _parent_hash: H256) -> H256 {
    Default::default()
}

pub async fn max_referee_distance(_client: &Client) -> u32 {
    5
}

pub async fn member_iter(client: &Client, _evaluation_block: H256) -> MemberIter {
    MemberIter(client.wot.get_enabled().into_iter())
}

pub struct MemberIter(std::vec::IntoIter<WotId>);

impl MemberIter {
    pub async fn next(&mut self) -> Result<Option<IdtyIndex>, subxt::error::Error> {
        Ok(self.0.next().map(|wot_id| wot_id.0 as u32))
    }
}

pub async fn cert_iter(client: &Client, _evaluation_block: H256) -> CertIter {
    CertIter(
        client
            .wot
            .get_enabled()
            .iter()
            .chain(client.wot.get_disabled().iter())
            .map(|wot_id| {
                (
                    wot_id.0 as IdtyIndex,
                    client
                        .wot
                        .get_links_source(*wot_id)
                        .unwrap_or_default()
                        .into_iter()
                        .map(|wot_id| (wot_id.0 as IdtyIndex, 0))
                        .collect::<Vec<(IdtyIndex, u32)>>(),
                )
            })
            .collect::<Vec<_>>()
            .into_iter(),
    )
}

pub struct CertIter(std::vec::IntoIter<(IdtyIndex, Vec<(IdtyIndex, u32)>)>);

impl CertIter {
    pub async fn next(&mut self) -> Result<Option<(u32, Vec<(u32, u32)>)>, subxt::error::Error> {
        Ok(self.0.next())
    }
}
+101 −0
// Copyright 2023 Axiom-Team
//
// This file is part of Duniter-v2S.
//
// Duniter-v2S is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, version 3 of the License.
//
// Duniter-v2S is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with Duniter-v2S. If not, see <https://www.gnu.org/licenses/>.

use dubp_wot::{
    WebOfTrust, data::rusty::RustyWebOfTrust, operations::distance::DistanceCalculator,
};
use flate2::read::ZlibDecoder;
use sp_runtime::Perbill;
use std::{fs::File, io::Read};

#[tokio::test]
#[ignore = "long to execute"]
async fn test_distance_against_v1() {
    let wot = wot_from_v1_file();
    let n = wot.size();
    let min_certs_for_referee = (wot.get_enabled().len() as f32).powf(1. / 5.).ceil() as u32;

    // Reference implementation
    let ref_calculator = dubp_wot::operations::distance::RustyDistanceCalculator;
    let t_a = std::time::Instant::now();
    let ref_results: Vec<Perbill> = wot
        .get_enabled()
        .into_iter()
        .chain(wot.get_disabled().into_iter())
        .zip(0..n)
        .map(|(i, _)| {
            let result = ref_calculator
                .compute_distance(
                    &wot,
                    dubp_wot::operations::distance::WotDistanceParameters {
                        node: i,
                        sentry_requirement: min_certs_for_referee,
                        step_max: 5,
                        x_percent: 0.8,
                    },
                )
                .unwrap();
            Perbill::from_rational(result.success, result.sentries)
        })
        .collect();
    println!("ref time: {}", t_a.elapsed().as_millis());

    // Our implementation
    let mut client = crate::api::client_from_wot(wot);
    client.pool_len = n;

    let t_a = std::time::Instant::now();
    let results = crate::compute_distance_evaluation(&client, &Default::default())
        .await
        .unwrap();
    println!("new time: {}", t_a.elapsed().as_millis());
    assert_eq!(results.0.len(), n);

    let mut errors: Vec<_> = results
        .0
        .iter()
        .zip(ref_results.iter())
        .map(|(r, r_ref)| r.deconstruct() as i64 - r_ref.deconstruct() as i64)
        .collect();
    errors.sort_unstable();
    println!(
        "Error: {:?} / {:?} / {:?} / {:?} / {:?}  (min / 1Q / med / 3Q / max)",
        errors[0],
        errors[errors.len() / 4],
        errors[errors.len() / 2],
        errors[errors.len() * 3 / 4],
        errors[errors.len() - 1]
    );

    let correct_results = results
        .0
        .iter()
        .zip(ref_results.iter())
        .map(|(r, r_ref)| (r == r_ref) as usize)
        .sum::<usize>();
    println!("Correct results: {correct_results} / {n}");
    assert_eq!(correct_results, n);
}
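The referee threshold used in the test above follows the fifth-root rule: an identity needs at least `ceil(n^(1/5))` received certifications to count as a referee, where `n` is the number of enabled members. A small standalone sketch of that computation:

```rust
// Fifth-root referee (sentry) threshold, as computed in the test above.
fn min_certs_for_referee(enabled_members: usize) -> u32 {
    (enabled_members as f32).powf(1.0 / 5.0).ceil() as u32
}

fn main() {
    // 3^5 = 243 and 4^5 = 1024, so a 1000-member web needs 4 certifications;
    // 5^5 = 3125 and 6^5 = 7776, so a 4000-member web needs 6.
    println!("{}", min_certs_for_referee(1000)); // 4
    println!("{}", min_certs_for_referee(4000)); // 6
}
```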

fn wot_from_v1_file() -> RustyWebOfTrust {
    let file = File::open("wot.deflate").expect("Cannot open wot.deflate");
    let mut decompressor = ZlibDecoder::new(file);
    let mut decompressed_bytes = Vec::new();
    decompressor
        .read_to_end(&mut decompressed_bytes)
        .expect("Cannot decompress wot.deflate");
    bincode::deserialize::<RustyWebOfTrust>(&decompressed_bytes).expect("Cannot decode wot.deflate")
}
+15 −15
# This is a minimal docker-compose.yml template for running a Duniter instance
# This is a minimal docker-compose.yml template for running a Duniter mirror node
# For more detailed examples, look at docker/compose folder

version: "3.5"

services:
  duniter-v2s:
    container_name: duniter-v2s
    # choose the version of the image here
    image: duniter/duniter-v2s:latest
  duniter-v2s-mirror:
    container_name: duniter-v2s-mirror
    # the image tells which network you are connecting to
    # here it is gdev network
    image: duniter/duniter-v2s-gdev-800:latest
    ports:
      # telemetry
      # prometheus telemetry to monitor resource use
      - 9615:9615
      # rpc
      - 9933:9933
      # rpc-ws
      # RPC API (ws and http)
      - 9944:9944
      # p2p
      # public p2p endpoint
      - 30333:30333
    environment:
      DUNITER_NODE_NAME: "duniter_local"
      DUNITER_CHAIN_NAME: "gdev"
      # read https://duniter.org/wiki/duniter-v2/configure-docker/
      # to configure these
      DUNITER_NODE_NAME: duniter_local
      DUNITER_CHAIN_NAME: gdev
      DUNITER_PUBLIC_ADDR: /dns/your.domain.name/tcp/30333
      DUNITER_LISTEN_ADDR: /ip4/0.0.0.0/tcp/30333
    volumes:
      - duniter-local-data:/var/lib/duniter

+74 −30
# Workaround for https://github.com/containers/buildah/issues/4742
FROM --platform=$BUILDPLATFORM docker.io/library/debian:bullseye-slim AS target

# ------------------------------------------------------------------------------
# Build Stage
# ------------------------------------------------------------------------------

# Building for Debian buster because we need the binary to be compatible
# with the image paritytech/ci-linux:production (currently based on
# debian:buster-slim) used by the gitlab CI
FROM rust:1-buster as build
# When building for a foreign arch, use cross-compilation
# https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1-bullseye AS build
ARG BUILDPLATFORM
ARG TARGETPLATFORM

# Debug
RUN echo "BUILDPLATFORM = $BUILDPLATFORM"
RUN echo "TARGETPLATFORM = $TARGETPLATFORM"

# We need the target arch triplet in both Debian and Rust flavors
RUN echo "DEBIAN_ARCH_TRIPLET='$(dpkg-architecture -A${TARGETPLATFORM#linux/} -qDEB_TARGET_MULTIARCH)'" >>/root/dynenv
RUN . /root/dynenv && \
    echo "RUST_ARCH_TRIPLET='$(echo "$DEBIAN_ARCH_TRIPLET" | sed -E 's/-linux-/-unknown&/')'" >>/root/dynenv
RUN cat /root/dynenv

WORKDIR /root

# Copy source tree
COPY . .

RUN test -x build/duniter || \
    ( \
        apt-get update && \
        DEBIAN_FRONTEND=noninteractive apt-get install -y clang cmake protobuf-compiler \
    )
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y clang cmake protobuf-compiler

# build duniter
ARG threads=1
RUN test -x build/duniter || \
    ( \
        CARGO_PROFILE_RELEASE_LTO="true" \
            cargo build --release -j $threads && \
        mkdir -p build && \
        mv target/release/duniter build/ \
    )
ARG debug=0
RUN if [ "$debug" = 0 ]; then \
        echo "CARGO_OPTIONS=--release" >>/root/dynenv && \
        echo "TARGET_FOLDER=release" >>/root/dynenv; \
    else \
        echo "TARGET_FOLDER=debug" >>/root/dynenv; \
    fi

# Create fake duniter-cucumber if is not exist
# The goal is to avoid error later, this binary is optional
RUN test -x build/duniter-cucumber || \
    ( \
# Configure cross-build environment if need be
RUN set -x && \
    if [ "$TARGETPLATFORM" != "$BUILDPLATFORM" ]; then \
        . /root/dynenv && \
        apt install -y gcc-$DEBIAN_ARCH_TRIPLET binutils-$DEBIAN_ARCH_TRIPLET && \
        rustup target add "$RUST_ARCH_TRIPLET" && \
        : https://github.com/rust-lang/cargo/issues/4133 && \
        echo "RUSTFLAGS='-C linker=$DEBIAN_ARCH_TRIPLET-gcc'; export RUSTFLAGS" >>/root/dynenv; \
    fi

# Build
ARG chain="gdev"
RUN set -x && \
    cat /root/dynenv && \
    . /root/dynenv && \
    cargo build -Zgit=shallow-deps --locked $CARGO_OPTIONS --no-default-features $BENCH_OPTIONS --features ${chain},embed --target "$RUST_ARCH_TRIPLET" && \
    cargo build -Zgit=shallow-deps --locked $CARGO_OPTIONS --target "$RUST_ARCH_TRIPLET" --package distance-oracle && \
    mkdir -p build && \
        touch build/duniter-cucumber \
    )
    mv target/$RUST_ARCH_TRIPLET/$TARGET_FOLDER/duniter build/ && \
    mv target/$RUST_ARCH_TRIPLET/$TARGET_FOLDER/distance-oracle build/

# Run tests if requested, except when cross-building
ARG cucumber=0
RUN if [ "$cucumber" != 0 ] && [ "$TARGETPLATFORM" = "$BUILDPLATFORM" ]; then \
        cargo ta && \
        cargo test -Zgit=shallow-deps --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests --features=runtime-benchmarks,constant-fees \
        cd target/debug/deps/ && \
        rm cucumber_tests-*.d && \
        mv cucumber_tests* ../../../build/duniter-cucumber; \
    fi

# ------------------------------------------------------------------------------
# Final Stage
# ------------------------------------------------------------------------------

FROM debian:buster-slim
FROM target

LABEL maintainer="Gilles Filippini <gilles.filippini@pini.fr>"
LABEL version="0.0.0"
LABEL description="Crypto-currency software (based on Substrate framework) to operate Ğ1 libre currency"

# Required certificates for RPC connections
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates
RUN update-ca-certificates
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

RUN adduser --home /var/lib/duniter duniter

# Configuration
# rpc, rpc-ws, p2p, telemetry
EXPOSE 9933 9944 30333 9615
# rpc, p2p, telemetry
EXPOSE 9944 30333 9615
VOLUME /var/lib/duniter
ENTRYPOINT ["docker-entrypoint"]
USER duniter

# Install
COPY --from=build /root/build/duniter /usr/local/bin/duniter
COPY --from=build /root/build/duniter-cucumber /usr/local/bin/duniter-cucumber
COPY --from=build /root/build /usr/local/bin/
COPY --from=build /root/dynenv /var/lib/duniter
COPY docker/docker-entrypoint /usr/local/bin/
COPY docker/docker-distance-entrypoint /usr/local/bin/

# Debug
RUN cat /var/lib/duniter/dynenv
+43 −24
@@ -4,42 +4,40 @@ Duniter is the software that supports the [Ğ1 libre-currency blockchain](https:

[Duniter v2s](https://git.duniter.org/nodes/rust/duniter-v2s) is a complete rewrite of Duniter based on the Substrate / Polkadot framework. **This is alpha state work in progress.**

# Minimal docker-compose file for an RPC (non validator) node
## Minimal docker-compose file for a mirror node

```
version: "3.5"

services:
  duniter-rpc:
    image: duniter/duniter-v2s:latest
  duniter-mirror:
    image: duniter/duniter-v2s-gdev:latest
    restart: unless-stopped
    ports:
      # Prometheus endpoint
      - 9615:9615
      # rpc via http
      - 9933:9933
      # rpc via websocket
      # rpc
      - 9944:9944
      # p2p
      - 30333:30333
    volumes:
      - data-rpc:/var/lib/duniter/
      - data-mirror:/var/lib/duniter/
    environment:
      - DUNITER_CHAIN_NAME=gdev
      - DUNITER_NODE_NAME=<my-node-name>

volumes:
  data-rpc:
  data-mirror:
```

# Minimal docker-compose file for a validator node
## Minimal docker-compose file for a validator node

```
version: "3.5"

services:
  duniter-validator:
    image: duniter/duniter-v2s:latest
    image: duniter/duniter-v2s-gdev:latest
    restart: unless-stopped
    ports:
      # Prometheus endpoint
@@ -57,26 +55,47 @@ volumes:
  data-validator:
```

# Environment variables
## Environment variables

| Name                         | Description                                                                                                                                                                                                                                                                                                                                          | Default                                                                                     |
| ---- | ----------- | ------- |
| ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| `DUNITER_NODE_NAME`          | The node name. This name will appear on the Substrate telemetry server when telemetry is enabled.                                                                                                                                                                                                                                                    | Random name                                                                                 |
| `DUNITER_CHAIN_NAME`         | The currency to process. "gdev" uses the embedded chainspec. A path allows using a local raw JSON chainspec.                                                                                                                                                                                                                                          | `dev` (development mode)                                                                    |
| `DUNITER_PUBLIC_ADDR` | The libp2p public address base. See [libp2p documentation](https://docs.libp2p.io/concepts/fundamentals/addressing/). This variable is useful when the node is behind a reverse-proxy with its ports not directly exposed.<br>Note: the `p2p/<peer_id>` part of the address shouldn't be set in this variable. It is automatically added by Duniter. | duniter-v2s guesses one from the node's IPv4 address. |
| `DUNITER_PUBLIC_ADDR`        | The libp2p public address base. See [libp2p documentation](https://docs.libp2p.io/concepts/fundamentals/addressing/). This variable is useful when the node is behind a reverse proxy with its ports not directly exposed.<br>Note: the `p2p/<peer_id>` part of the address shouldn't be set in this variable. It is automatically added by Duniter. | duniter-v2s guesses one from the node's IPv4 address.                                       |
| `DUNITER_LISTEN_ADDR`        | The libp2p listen address. See [libp2p documentation](https://docs.libp2p.io/concepts/fundamentals/addressing/). This variable is useful when running a validator node behind a reverse proxy, to force the P2P end point in websocket mode with:<br> `DUNITER_LISTEN_ADDR=/ip4/0.0.0.0/tcp/30333/ws`                                                | Non validator node: `/ip4/0.0.0.0/tcp/30333/ws`<br>Validator node: `/ip4/0.0.0.0/tcp/30333` |
| `DUNITER_RPC_CORS`           | Value of the polkadot `--rpc-cors` option.                                                                                                                                                                                                                                                                                                           | `all`                                                                                       |
| `DUNITER_VALIDATOR`          | Boolean (`true` / `false`) to run the node in validator mode. Configure the polkadot options `--validator --rpc-methods Unsafe`.                                                                                                                                                                                                                     | `false`                                                                                     |
| `DUNITER_DISABLE_PROMETHEUS` | Boolean to disable the Prometheus endpoint on port 9615.                                                                                                                                                                                                                                                                                             | `false`                                                                                     |
| `DUNITER_DISABLE_TELEMETRY` | Boolean to disable connecting to the Substrate tememetry server. | `false` |
| `DUNITER_PRUNING_PROFILE` | * `default`<br> * `archive`: keep all blocks and state blocks<br> * `light`: keep only last 256 state blocks and last 14400 blocks (one day duration) | `default` |
| `DUNITER_DISABLE_TELEMETRY`  | Boolean to disable connecting to the Substrate telemetry server.                                                                                                                                                                                                                                                                                     | `false`                                                                                     |
| `DUNITER_PRUNING_PROFILE`    | * `default`<br> * `archive`: keep all blocks and state blocks<br> * `light`: keep only last 256 state blocks and last 14400 blocks (one day duration)                                                                                                                                                                                                 | `default`                                                                                   |
| `DUNITER_PUBLIC_RPC`         | The public RPC endpoint to gossip on the network and make available in the apps. | None |
| `DUNITER_PUBLIC_SQUID`       | The public Squid graphql endpoint to gossip on the network and make available in the apps. | None |
| `DUNITER_PUBLIC_ENDPOINTS`   | Path to a JSON file containing public endpoints to gossip on the network. The file should use the following format:<br>```{"endpoints": [  { "protocol": "rpc", "address": "wss://gdev.example.com" },  { "protocol": "squid", "address": "gdev.example.com/graphql/v1" }]}``` | None |

# Other duniter options
## Other Duniter options

You can pass any other option to Duniter using the `command` docker-compose element:

You can pass any other option to duniter using the `command` docker-compose element:
```
    command:
      # workaround for substrate issue #12073
      # https://github.com/paritytech/substrate/issues/12073
      - "--wasm-execution=interpreted-i-know-what-i-do"
```

## Start Duniter

Once you are happy with your `docker-compose.yml` file, run in the same folder:

```bash
docker compose up -d
```

## Running duniter subcommands or a custom set of options

To run duniter from the command line without the default configuration detailed in the "Environment variables" section, use `--` as the first argument. For example:

```
$ docker run --rm duniter/duniter-v2s-gdev:latest -- key generate
$ docker run --rm duniter/duniter-v2s-gdev:latest -- --chain gdev ...
```
+37 −0
FROM paritytech/ci-linux:production

# Set the working directory
WORKDIR /app/

# Copy the toolchain
COPY rust-toolchain.toml ./

# Install toolchain, substrate and cargo-deb with cargo cache
RUN --mount=type=cache,target=/root/.cargo \
    cargo install cargo-deb

# Create a dummy project to cache dependencies
COPY Cargo.toml .
COPY rust-toolchain.toml ./
RUN --mount=type=cache,target=/app/target \
    --mount=type=cache,target=/root/.cargo/registry \
    mkdir src && \
    sed -i '/git = \|version = /!d' Cargo.toml && \
    sed -i 's/false/true/' Cargo.toml && \
    sed -i '1s/^/\[package\]\nname\=\"Dummy\"\n\[dependencies\]\n/' Cargo.toml && \
    echo "fn main() {}" > src/main.rs && \
    cargo build -Zgit=shallow-deps --release && \
    rm -rf src Cargo.lock Cargo.toml

# Copy the entire project
COPY . .

# Build the project and create Debian packages
RUN --mount=type=cache,target=/app/target \
    --mount=type=cache,target=/root/.cargo/registry \
    cargo build -Zgit=shallow-deps --release && \
    cargo deb --no-build -p duniter && \
    cp -r ./target/debian/ ./

# Clean up unnecessary files to reduce image size
RUN rm -rf /app/target/release /root/.cargo/registry
+37 −0
# docker-compose.yml template for running a Duniter smith node
# for more doc, see https://duniter.org/wiki/duniter-v2/
services:
  # duniter smith node
  duniter-v2s-smith:
    container_name: duniter-v2s-smith
    image: duniter/duniter-v2s-gdev-800:latest
    ports:
      # RPC API of a smith node should not be exposed publicly!
      - 127.0.0.1:9944:9944
      # public p2p endpoint
      - 30333:30333
    environment:
      DUNITER_NODE_NAME: duniter_smith
      DUNITER_CHAIN_NAME: gdev
      DUNITER_VALIDATOR: true
      DUNITER_PRUNING_PROFILE: light
      DUNITER_PUBLIC_ADDR: /dns/your.domain.name/tcp/30333
      DUNITER_LISTEN_ADDR: /ip4/0.0.0.0/tcp/30333
    volumes:
      - duniter-smith-data:/var/lib/duniter
  # distance oracle
  distance-oracle:
    container_name: distance-oracle
    # choose the version of the image here
    image: duniter/duniter-v2s-gdev:latest
    entrypoint: docker-distance-entrypoint
    environment:
      ORACLE_RPC_URL: ws://duniter-v2s-smith:9944
      ORACLE_RESULT_DIR: /var/lib/duniter/chains/gdev/distance/
      ORACLE_EXECUTION_INTERVAL: 1800
      ORACLE_LOG_LEVEL: info
    volumes:
      - duniter-smith-data:/var/lib/duniter

volumes:
  duniter-smith-data:
+0 −27
# This is a docker template for running a gdev mirror

version: "3.5"

services:
  duniter-rpc:
    image: duniter/duniter-v2s:latest
    restart: unless-stopped
    ports:
      # telemetry
      - 127.0.0.1:9615:9615
      # rpc
      - 127.0.0.1:9933:9933
      # rpc-ws
      - 127.0.0.1:9944:9944
      # p2p
      - 30333:30333
    volumes:
      - ./node.key:/etc/duniter/node.key
      - duniter-rpc-data:/var/lib/duniter/
    environment:
      - DUNITER_CHAIN_NAME=gdev
      # SERVER_DOMAIN should be replaced by a domain name that point on your server
      - DUNITER_PUBLIC_ADDR=/dns/${SERVER_DOMAIN?SERVER_DOMAIN should be set}/tcp/30333/ws

volumes:
  duniter-rpc-data:
+0 −45
version: "3.5"

services:
  duniter-rpc:
    image: duniter/duniter-v2s:latest
    restart: unless-stopped
    ports:
      # telemetry
      - 127.0.0.1:9615:9615
      # rpc
      - 127.0.0.1:9933:9933
      # rpc-ws
      - 127.0.0.1:9944:9944
      # p2p
      - 30333:30333
    volumes:
      - ./node.key:/etc/duniter/validator-node.key
      - duniter-rpc-data:/var/lib/duniter/
    environment:
      - DUNITER_CHAIN_NAME=gdev
      # RPC_SERVER_DOMAIN should be replaced by a domain name that point on your server
      - DUNITER_PUBLIC_ADDR=/dns/${RPC_SERVER_DOMAIN?RPC_SERVER_DOMAIN should be set}/tcp/30333/ws

  duniter-validator:
    image: duniter/duniter-v2s:latest
    restart: unless-stopped
    ports:
      # telemetry
      - 127.0.0.1:9616:9615
      # rpc
      - 127.0.0.1:9934:9933
      # rpc-ws
      - 127.0.0.1:9945:9944
      # p2p
      - 30334:30333
    volumes:
      - ./node.key:/etc/duniter/validator-node.key
      - duniter-validator-data:/var/lib/duniter/
    environment:
      - DUNITER_CHAIN_NAME=gdev
      # VALIDATOR_SERVER_DOMAIN should be replaced by a domain name that point on your server
      - DUNITER_PUBLIC_ADDR=/dns/${VALIDATOR_SERVER_DOMAIN?VALIDATOR_SERVER_DOMAIN should be set}/tcp/30333
      - DUNITER_VALIDATOR=true
    command:
      - "--pruning=14400"
+0 −31
version: "3.5"

services:
  duniter-rpc:
    image: duniter/duniter-v2s:DUNITER_IMAGE_TAG
    restart: unless-stopped
    ports:
      - "9944:9944"
      - "30333:30333"
    volumes:
      - ./duniter-rpc/:/var/lib/duniter/
    environment:
      - DUNITER_CHAIN_NAME=/var/lib/duniter/CURRENCY-raw.json
    command:
      - "--bootnodes"
      - "/dns/duniter-validator/tcp/30333/p2p/VALIDATOR_NODE_KEY"

  duniter-validator:
    image: duniter/duniter-v2s:DUNITER_IMAGE_TAG
    restart: unless-stopped
    ports:
      - "127.0.0.1:9945:9944"
      - "30334:30333"
    volumes:
      - ./duniter-validator/:/var/lib/duniter/
    environment:
      - DUNITER_CHAIN_NAME=/var/lib/duniter/CURRENCY-raw.json
      - DUNITER_VALIDATOR=true
    command:
      - "--bootnodes"
      - "/dns/duniter-rpc/tcp/30333/p2p/RPC_NODE_KEY"
+20 −0
#!/bin/bash

# Custom startup: if the first argument is '--',
# then we just run the distance oracle with the provided arguments (minus the '--')
# without applying the automated configuration below
if [ "$1" = -- ]; then
  shift
  distance-oracle "$@"
else
  ORACLE_RESULT_DIR="${ORACLE_RESULT_DIR:-/distance}"
  ORACLE_EXECUTION_INTERVAL="${ORACLE_EXECUTION_INTERVAL:-1800}"
  ORACLE_RPC_URL="${ORACLE_RPC_URL:-ws://127.0.0.1:9944}"
  ORACLE_LOG_LEVEL="${ORACLE_LOG_LEVEL:-info}"

  while true; do
    distance-oracle --evaluation-result-dir "$ORACLE_RESULT_DIR" --rpc-url "$ORACLE_RPC_URL" --log "$ORACLE_LOG_LEVEL"
    echo "Waiting $ORACLE_EXECUTION_INTERVAL seconds before next execution..."
    sleep "$ORACLE_EXECUTION_INTERVAL"
  done
fi
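The `--` dispatch above can be tried without the real binary; a minimal sketch with a shell function standing in for `distance-oracle` (a hypothetical stand-in, not part of the image):

```shell
# Re-creation of the entrypoint's "--" dispatch; echo stands in for the
# real distance-oracle binary (assumption: the binary is not installed here).
run() {
  if [ "$1" = -- ]; then
    shift
    echo "custom: distance-oracle $*"
  else
    echo "automated startup"
  fi
}

run -- --help   # bypasses the automated configuration
run             # falls through to the automated loop
```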
#!/bin/bash

# Custom startup: if the first argument is '--',
# then we just run duniter with the provided arguments (minus the '--')
# without applying the automated configuration below
if [ "$1" = -- ]; then
  shift
  exec duniter "$@"
fi

# Normal startup
function boolean () {
  echo "$1" | sed -E 's/^(true|yes|1)$/true/i'
}
@@ -12,27 +21,60 @@ function ternary () {
  fi
}

# Define chain name at the beginning
# with #274, a default could be provided by the network branch
DUNITER_CHAIN_NAME="${DUNITER_CHAIN_NAME:-dev}"
case "$DUNITER_CHAIN_NAME" in
  dev)
    chain=(--dev)
    ;;
  *)
    chain=(--chain "$DUNITER_CHAIN_NAME")
    ;;
esac

# Node name will appear on network
DUNITER_NODE_NAME="${DUNITER_NODE_NAME:-$DUNITER_INSTANCE_NAME}"
if [ -n "$DUNITER_NODE_NAME" ]; then
  set -- "$@" --name "$DUNITER_NODE_NAME"
fi

# Path of key file. Should be generated below if not present before starting Duniter
_DUNITER_KEY_FILE=/var/lib/duniter/node.key
set -- "$@" --node-key-file "$_DUNITER_KEY_FILE"

# Generate node.key if not existing (chain name is required)
if [ ! -f "$_DUNITER_KEY_FILE" ]; then
  echo "Generating node key file '$_DUNITER_KEY_FILE'..."
  duniter key generate-node-key --file "$_DUNITER_KEY_FILE"
  duniter key generate-node-key --file "$_DUNITER_KEY_FILE" "${chain[@]}"
else
  echo "Node key file '$_DUNITER_KEY_FILE' exists."
fi
# Log peer ID
_DUNITER_PEER_ID="$(duniter key inspect-node-key --file "$_DUNITER_KEY_FILE")"
echo "Node peer ID is '$_DUNITER_PEER_ID'."

# Define public address (with dns, correct port and protocol for instance)
if [ -n "$DUNITER_PUBLIC_ADDR" ]; then
  set -- "$@" --public-addr "$DUNITER_PUBLIC_ADDR"
fi

# Define public RPC endpoint (gossiped on the network)
if [ -n "$DUNITER_PUBLIC_RPC" ]; then
  set -- "$@" --public-rpc "$DUNITER_PUBLIC_RPC"
fi

# Define public Squid endpoint (gossiped on the network)  
if [ -n "$DUNITER_PUBLIC_SQUID" ]; then
  set -- "$@" --public-squid "$DUNITER_PUBLIC_SQUID"
fi

# Define public endpoints from JSON file (gossiped on the network)
if [ -n "$DUNITER_PUBLIC_ENDPOINTS" ]; then
  set -- "$@" --public-endpoints "$DUNITER_PUBLIC_ENDPOINTS"
fi

# Define listen address (inside docker)
if [ -n "$DUNITER_LISTEN_ADDR" ]; then
  set -- "$@" --listen-addr "$DUNITER_LISTEN_ADDR"
fi
@@ -40,6 +82,7 @@ fi
DUNITER_RPC_CORS="${DUNITER_RPC_CORS:-all}"
set -- "$@" --rpc-cors "$DUNITER_RPC_CORS"

# In case of validator, unsafe rpc methods are needed (like rotate_key) and should not be exposed publicly
DUNITER_VALIDATOR=$(boolean "${DUNITER_VALIDATOR:-false}")
if [ "$DUNITER_VALIDATOR" = true ]; then
  set -- "$@" --rpc-methods Unsafe --validator
@@ -55,6 +98,7 @@ if [ "$DUNITER_DISABLE_TELEMETRY" = true ]; then
  set -- "$@" --no-telemetry
fi

# Set pruning profile
DUNITER_PRUNING_PROFILE="${DUNITER_PRUNING_PROFILE:-default}"
case "$DUNITER_PRUNING_PROFILE" in
  default)
@@ -70,19 +114,12 @@ case "$DUNITER_PRUNING_PROFILE" in
    ;;
esac

DUNITER_CHAIN_NAME="${DUNITER_CHAIN_NAME:-dev}"
case "$DUNITER_CHAIN_NAME" in
  dev)
    chain=(--dev)
    ;;
  *)
    chain=(--chain "$DUNITER_CHAIN_NAME")
    ;;
esac

# Set main command
# Since we are inside docker, we can bind to all interfaces.
# User will bind port to host interface or set reverse proxy when needed.
set -- "$@" \
  "${chain[@]}" \
  -d /var/lib/duniter --unsafe-rpc-external --unsafe-ws-external
  -d /var/lib/duniter --unsafe-rpc-external

echo "Starting duniter with parameters:" "$@"
exec duniter "$@"
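The script assembles the final command line incrementally by rebuilding the positional parameters. A standalone sketch of the `set -- "$@" …` pattern, with `echo` standing in for the final `exec duniter` and a hypothetical node name:

```shell
# The entrypoint appends flags to the positional parameters with `set --`,
# then hands them to the binary; echo stands in for duniter here.
set --                                    # start from an empty argument list
DUNITER_NODE_NAME="my-node"               # hypothetical example value
if [ -n "$DUNITER_NODE_NAME" ]; then
  set -- "$@" --name "$DUNITER_NODE_NAME"
fi
set -- "$@" --rpc-cors all
CMDLINE="duniter $*"
echo "would start: $CMDLINE"
```

Appending with `set -- "$@" …` keeps each flag and value as a separate, correctly quoted argument, which is why the script never builds the command line as a single string.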
@@ -3,7 +3,7 @@
This functional documentation presents how wallets can interact with the blockchain.
It is intended to complete the [runtime calls documentation](./runtime-calls.md) in a runtime-specific way to fit the real needs of wallet developers.

Only ĞDev is covered for now.
NOTE: more detailed documentation is available at <https://duniter.org/wiki/duniter-v2/doc/>

## Notations

@@ -11,15 +11,29 @@ Only ĞDev is covered for now.

## Account existence

An account exists if and only if it contains at least the existential deposit (2 ĞD).
An account exists if and only if it contains at least the existential deposit (`balances.existentialDeposit` = 1 ĞD).

## Become member

Only use `identity` pallet. The `membership` calls are disabled.
Only use `identity` pallet.

1. The account that wants to gain membership needs to exist.
1. Any account that already has membership and respects the identity creation period can create an identity for another account, using `identity.createIdentity`.
1. The account has to confirm its identity with a name, using `identity.confirmIdentity`. The name must be ASCII alphanumeric, punctuation or space characters: `` /^[-!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~a-zA-Z0-9 ]{3,64}$/ `` (additionally, trailing spaces and double spaces are forbidden, as a phishing countermeasure). If the name is already used, the call will fail.
1. 4 different member accounts must certify the account using `cert.addCert`.
1. The distance evaluation must be requested for the pending identity using `distance.requestDistanceEvaluation`.
1. 3 distance sessions later, if the distance rule is respected, the identity is validated automatically.

## Change key

A member can request a key change via the `identity.change_owner_key` call. It needs the following SCALE-encoded payload (see the SCALE encoding section below):

- The new owner key payload prefix (rust definition: `b"icok"`)
- The genesis block hash (rust type `[u8; 32]` (`H256`))
- The identity index (rust type `u64`)
- The old key (rust type `u64`)

This payload must be signed with the new key.

## Revoke an identity

@@ -29,9 +43,20 @@ This feature is useful in case the user has lost their private key since the rev

### Generate the revocation payload

1. Scale-encode the revocation payload, that is the concatenation of the 32-bits public key and the genesis block hash.
2. Store this payload and its signature.
The revocation needs the following SCALE-encoded payload (see the SCALE encoding section below):

- The revocation payload prefix (rust definition: `b"revo"`)
- The identity index (rust type `u64`)
- The genesis block hash (rust type `[u8; 32]` (`H256`))

This payload must be signed with the corresponding revocation key.
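The fields above can be assembled byte-for-byte in shell as a rough illustration of the layout (not an official tool). This assumes GNU `tac` and `xxd` are available; the identity index 42 and the all-zero genesis hash are placeholder values, and SCALE encodes a `u64` as 8 little-endian bytes:

```shell
# Assemble the revocation payload bytes: b"revo" ++ u64 LE index ++ 32-byte hash.
# IDTY_INDEX and GENESIS_HASH below are placeholder values for illustration.
IDTY_INDEX=42
GENESIS_HASH=$(printf '0%.0s' $(seq 64))        # 64 hex zeros = 32 zero bytes
{
  printf 'revo'                                           # payload prefix
  printf '%016x' "$IDTY_INDEX" | tac -rs .. | xxd -r -p   # u64 little-endian
  printf '%s' "$GENESIS_HASH" | xxd -r -p                 # genesis block hash
} > /tmp/revocation_payload.bin
wc -c < /tmp/revocation_payload.bin                       # 4 + 8 + 32 = 44 bytes
```

The resulting 44-byte file is what must then be signed with the revocation key.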

### Effectively revoke the identity

1. From any origin that can pay the fee, use `identity.revokeIdentity` with the revocation payload.

## SCALE encoding

SCALE codec documentation: https://docs.substrate.io/reference/scale-codec/.

At the end of this documentation you'll find links to SCALE codec implementations for other languages.
+394 −0
msgid "System.InvalidSpecName"
msgstr "The name of specification does not match between the current runtime
and the new runtime."
msgid "System.SpecVersionNeedsToIncrease"
msgstr "The specification version is not allowed to decrease between the current runtime
and the new runtime."
msgid "System.FailedToExtractRuntimeVersion"
msgstr "Failed to extract the runtime version from the new runtime.

Either calling `Core_version` or decoding `RuntimeVersion` failed."
msgid "System.NonDefaultComposite"
msgstr "Suicide called when the account has non-default composite data."
msgid "System.NonZeroRefCount"
msgstr "There is a non-zero reference count preventing the account from being purged."
msgid "System.CallFiltered"
msgstr "The origin filter prevent the call to be dispatched."
msgid "System.MultiBlockMigrationsOngoing"
msgstr "A multi-block migration is ongoing and prevents the current code from being replaced."
msgid "System.NothingAuthorized"
msgstr "No upgrade authorized."
msgid "System.Unauthorized"
msgstr "The submitted code is not authorized."
msgid "Scheduler.FailedToSchedule"
msgstr "Failed to schedule a call"
msgid "Scheduler.NotFound"
msgstr "Cannot find the scheduled call."
msgid "Scheduler.TargetBlockNumberInPast"
msgstr "Given target block number is in the past."
msgid "Scheduler.RescheduleNoChange"
msgstr "Reschedule failed because it does not change scheduled time."
msgid "Scheduler.Named"
msgstr "Attempt to use a non-named function on a named task."
msgid "Babe.InvalidEquivocationProof"
msgstr "An equivocation proof provided as part of an equivocation report is invalid."
msgid "Babe.InvalidKeyOwnershipProof"
msgstr "A key ownership proof provided as part of an equivocation report is invalid."
msgid "Babe.DuplicateOffenceReport"
msgstr "A given equivocation report is valid but already previously reported."
msgid "Babe.InvalidConfiguration"
msgstr "Submitted configuration is invalid."
msgid "Balances.VestingBalance"
msgstr "Vesting balance too high to send value."
msgid "Balances.LiquidityRestrictions"
msgstr "Account liquidity restrictions prevent withdrawal."
msgid "Balances.InsufficientBalance"
msgstr "Balance too low to send value."
msgid "Balances.ExistentialDeposit"
msgstr "Value too low to create account due to existential deposit."
msgid "Balances.Expendability"
msgstr "Transfer/payment would kill account."
msgid "Balances.ExistingVestingSchedule"
msgstr "A vesting schedule already exists for this account."
msgid "Balances.DeadAccount"
msgstr "Beneficiary account must pre-exist."
msgid "Balances.TooManyReserves"
msgstr "Number of named reserves exceed `MaxReserves`."
msgid "Balances.TooManyHolds"
msgstr "Number of holds exceed `VariantCountOf<T::RuntimeHoldReason>`."
msgid "Balances.TooManyFreezes"
msgstr "Number of freezes exceed `MaxFreezes`."
msgid "Balances.IssuanceDeactivated"
msgstr "The issuance cannot be modified since it is already deactivated."
msgid "Balances.DeltaZero"
msgstr "The delta cannot be zero."
msgid "OneshotAccount.BlockHeightInFuture"
msgstr "Block height is in the future."
msgid "OneshotAccount.BlockHeightTooOld"
msgstr "Block height is too old."
msgid "OneshotAccount.DestAccountNotExist"
msgstr "Destination account does not exist."
msgid "OneshotAccount.ExistentialDeposit"
msgstr "Destination account has a balance less than the existential deposit."
msgid "OneshotAccount.InsufficientBalance"
msgstr "Source account has insufficient balance."
msgid "OneshotAccount.OneshotAccountAlreadyCreated"
msgstr "Destination oneshot account already exists."
msgid "OneshotAccount.OneshotAccountNotExist"
msgstr "Source oneshot account does not exist."
msgid "SmithMembers.OriginMustHaveAnIdentity"
msgstr "Issuer of anything (invitation, acceptance, certification) must have an identity ID"
msgid "SmithMembers.OriginHasNeverBeenInvited"
msgstr "Issuer must be known as a potential smith"
msgid "SmithMembers.InvitationIsASmithPrivilege"
msgstr "Invitation is reserved for smiths"
msgid "SmithMembers.InvitationIsAOnlineSmithPrivilege"
msgstr "Invitation is reserved for online smiths"
msgid "SmithMembers.InvitationAlreadyAccepted"
msgstr "Invitation must not have been accepted yet"
msgid "SmithMembers.InvitationOfExistingNonExcluded"
msgstr "Invitation of an already known smith is forbidden except if it has been excluded"
msgid "SmithMembers.InvitationOfNonMember"
msgstr "Invitation of a non-member (of the WoT) is forbidden"
msgid "SmithMembers.CertificationMustBeAgreed"
msgstr "Certification cannot be made on someone who has not accepted an invitation"
msgid "SmithMembers.CertificationOnExcludedIsForbidden"
msgstr "Certification cannot be made on excluded"
msgid "SmithMembers.CertificationIsASmithPrivilege"
msgstr "Issuer must be a smith"
msgid "SmithMembers.CertificationIsAOnlineSmithPrivilege"
msgstr "Only online smiths can certify"
msgid "SmithMembers.CertificationOfSelfIsForbidden"
msgstr "Smith cannot certify itself"
msgid "SmithMembers.CertificationReceiverMustHaveBeenInvited"
msgstr "Receiver must be invited by another smith"
msgid "SmithMembers.CertificationAlreadyExists"
msgstr "Receiver must not already have this certification"
msgid "SmithMembers.CertificationStockFullyConsumed"
msgstr "A smith has a limited stock of certifications"
msgid "AuthorityMembers.AlreadyIncoming"
msgstr "Member already incoming."
msgid "AuthorityMembers.AlreadyOnline"
msgstr "Member already online."
msgid "AuthorityMembers.AlreadyOutgoing"
msgstr "Member already outgoing."
msgid "AuthorityMembers.MemberIdNotFound"
msgstr "Owner key is invalid as a member."
msgid "AuthorityMembers.MemberBlacklisted"
msgstr "Member is blacklisted."
msgid "AuthorityMembers.MemberNotBlacklisted"
msgstr "Member is not blacklisted."
msgid "AuthorityMembers.MemberNotFound"
msgstr "Member not found."
msgid "AuthorityMembers.NotOnlineNorIncoming"
msgstr "Neither online nor scheduled."
msgid "AuthorityMembers.NotMember"
msgstr "Not member."
msgid "AuthorityMembers.SessionKeysNotProvided"
msgstr "Session keys not provided."
msgid "AuthorityMembers.TooManyAuthorities"
msgstr "Too many authorities."
msgid "Session.InvalidProof"
msgstr "Invalid ownership proof."
msgid "Session.NoAssociatedValidatorId"
msgstr "No associated validator ID for account."
msgid "Session.DuplicatedKey"
msgstr "Registered duplicate key."
msgid "Session.NoKeys"
msgstr "No keys are associated with this account."
msgid "Session.NoAccount"
msgstr "Key setting account is not live, so it's impossible to associate keys."
msgid "Grandpa.PauseFailed"
msgstr "Attempt to signal GRANDPA pause when the authority set isn't live
(either paused or already pending pause)."
msgid "Grandpa.ResumeFailed"
msgstr "Attempt to signal GRANDPA resume when the authority set isn't paused
(either live or already pending resume)."
msgid "Grandpa.ChangePending"
msgstr "Attempt to signal GRANDPA change with one already pending."
msgid "Grandpa.TooSoon"
msgstr "Cannot signal forced change so soon after last."
msgid "Grandpa.InvalidKeyOwnershipProof"
msgstr "A key ownership proof provided as part of an equivocation report is invalid."
msgid "Grandpa.InvalidEquivocationProof"
msgstr "An equivocation proof provided as part of an equivocation report is invalid."
msgid "Grandpa.DuplicateOffenceReport"
msgstr "A given equivocation report is valid but already previously reported."
msgid "ImOnline.InvalidKey"
msgstr "Non existent public key."
msgid "ImOnline.DuplicatedHeartbeat"
msgstr "Duplicated heartbeat."
msgid "Sudo.RequireSudo"
msgstr "Sender must be the Sudo account."
msgid "Preimage.TooBig"
msgstr "Preimage is too large to store on-chain."
msgid "Preimage.AlreadyNoted"
msgstr "Preimage has already been noted on-chain."
msgid "Preimage.NotAuthorized"
msgstr "The user is not authorized to perform this action."
msgid "Preimage.NotNoted"
msgstr "The preimage cannot be removed since it has not yet been noted."
msgid "Preimage.Requested"
msgstr "A preimage may not be removed when there are outstanding requests."
msgid "Preimage.NotRequested"
msgstr "The preimage request cannot be removed since no outstanding requests exist."
msgid "Preimage.TooMany"
msgstr "More than `MAX_HASH_UPGRADE_BULK_COUNT` hashes were requested to be upgraded at once."
msgid "Preimage.TooFew"
msgstr "Too few hashes were requested to be upgraded (i.e. zero)."
msgid "TechnicalCommittee.NotMember"
msgstr "Account is not a member"
msgid "TechnicalCommittee.DuplicateProposal"
msgstr "Duplicate proposals not allowed"
msgid "TechnicalCommittee.ProposalMissing"
msgstr "Proposal must exist"
msgid "TechnicalCommittee.WrongIndex"
msgstr "Mismatched index"
msgid "TechnicalCommittee.DuplicateVote"
msgstr "Duplicate vote ignored"
msgid "TechnicalCommittee.AlreadyInitialized"
msgstr "Members are already initialized!"
msgid "TechnicalCommittee.TooEarly"
msgstr "The close call was made too early, before the end of the voting."
msgid "TechnicalCommittee.TooManyProposals"
msgstr "There can only be a maximum of `MaxProposals` active proposals."
msgid "TechnicalCommittee.WrongProposalWeight"
msgstr "The given weight bound for the proposal was too low."
msgid "TechnicalCommittee.WrongProposalLength"
msgstr "The given length bound for the proposal was too low."
msgid "TechnicalCommittee.PrimeAccountNotMember"
msgstr "Prime account is not a member"
msgid "TechnicalCommittee.ProposalActive"
msgstr "Proposal is still active."
msgid "UniversalDividend.AccountNotAllowedToClaimUds"
msgstr "This account is not allowed to claim UDs."
msgid "Wot.NotEnoughCerts"
msgstr "Insufficient certifications received."
msgid "Wot.TargetStatusInvalid"
msgstr "Target status is incompatible with this operation."
msgid "Wot.IdtyCreationPeriodNotRespected"
msgstr "Identity creation period not respected."
msgid "Wot.NotEnoughReceivedCertsToCreateIdty"
msgstr "Insufficient received certifications to create identity."
msgid "Wot.MaxEmittedCertsReached"
msgstr "Maximum number of emitted certifications reached."
msgid "Wot.IssuerNotMember"
msgstr "Issuer cannot emit a certification because it is not a member."
msgid "Wot.IdtyNotFound"
msgstr "Issuer or receiver not found."
msgid "Wot.MembershipRenewalPeriodNotRespected"
msgstr "Membership can only be renewed after an antispam delay."
msgid "Identity.IdtyAlreadyConfirmed"
msgstr "Identity already confirmed."
msgid "Identity.IdtyAlreadyCreated"
msgstr "Identity already created."
msgid "Identity.IdtyIndexNotFound"
msgstr "Identity index not found."
msgid "Identity.IdtyNameAlreadyExist"
msgstr "Identity name already exists."
msgid "Identity.IdtyNameInvalid"
msgstr "Invalid identity name."
msgid "Identity.IdtyNotFound"
msgstr "Identity not found."
msgid "Identity.InvalidSignature"
msgstr "Invalid payload signature."
msgid "Identity.OwnerKeyUsedAsValidator"
msgstr "Key used as validator."
msgid "Identity.OwnerKeyInBound"
msgstr "Key in bound period."
msgid "Identity.InvalidRevocationKey"
msgstr "Invalid revocation key."
msgid "Identity.IssuerNotMember"
msgstr "Issuer is not a member and cannot perform this action."
msgid "Identity.NotRespectIdtyCreationPeriod"
msgstr "Identity creation period is not respected."
msgid "Identity.OwnerKeyAlreadyRecentlyChanged"
msgstr "Owner key already changed recently."
msgid "Identity.OwnerKeyAlreadyUsed"
msgstr "Owner key already used."
msgid "Identity.ProhibitedToRevertToAnOldKey"
msgstr "Reverting to an old key is prohibited."
msgid "Identity.AlreadyRevoked"
msgstr "Already revoked."
msgid "Identity.CanNotRevokeUnconfirmed"
msgstr "Cannot revoke an identity that was never a member."
msgid "Identity.CanNotRevokeUnvalidated"
msgstr "Cannot revoke an identity that was never a member."
msgid "Identity.AccountNotExist"
msgstr "Cannot link to a nonexistent account."
msgid "Identity.InsufficientBalance"
msgstr "Insufficient balance to create an identity."
msgid "Identity.InvalidLegacyRevocationFormat"
msgstr "Legacy revocation document format is invalid"
msgid "Membership.MembershipNotFound"
msgstr "Membership not found, can not renew."
msgid "Membership.AlreadyMember"
msgstr "Already member, can not add membership."
msgid "Certification.OriginMustHaveAnIdentity"
msgstr "Issuer of a certification must have an identity"
msgid "Certification.CannotCertifySelf"
msgstr "Identity cannot certify itself."
msgid "Certification.IssuedTooManyCert"
msgstr "Identity has already issued the maximum number of certifications."
msgid "Certification.NotEnoughCertReceived"
msgstr "Insufficient certifications received."
msgid "Certification.NotRespectCertPeriod"
msgstr "Identity has issued a certification too recently."
msgid "Certification.CertAlreadyExists"
msgstr "Can not add an already-existing cert"
msgid "Certification.CertDoesNotExist"
msgstr "Can not renew a non-existing cert"
msgid "Distance.AlreadyInEvaluation"
msgstr "Distance is already under evaluation."
msgid "Distance.TooManyEvaluationsByAuthor"
msgstr "Too many evaluations requested by author."
msgid "Distance.TooManyEvaluationsInBlock"
msgstr "Too many evaluations for this block."
msgid "Distance.NoAuthor"
msgstr "No author for this block."
msgid "Distance.CallerHasNoIdentity"
msgstr "Caller has no identity."
msgid "Distance.CallerIdentityNotFound"
msgstr "Caller identity not found."
msgid "Distance.CallerNotMember"
msgstr "Caller not member."
msgid "Distance.CallerStatusInvalid"
msgstr "Caller status is invalid."
msgid "Distance.TargetIdentityNotFound"
msgstr "Target identity not found."
msgid "Distance.QueueFull"
msgstr "Evaluation queue is full."
msgid "Distance.TooManyEvaluators"
msgstr "Too many evaluators in the current evaluation pool."
msgid "Distance.WrongResultLength"
msgstr "Evaluation result has a wrong length."
msgid "Distance.TargetMustBeUnvalidated"
msgstr "Targeted distance evaluation request is only possible for an unvalidated identity."
msgid "AtomicSwap.AlreadyExist"
msgstr "Swap already exists."
msgid "AtomicSwap.InvalidProof"
msgstr "Swap proof is invalid."
msgid "AtomicSwap.ProofTooLarge"
msgstr "Proof is too large."
msgid "AtomicSwap.SourceMismatch"
msgstr "Source does not match."
msgid "AtomicSwap.AlreadyClaimed"
msgstr "Swap has already been claimed."
msgid "AtomicSwap.NotExist"
msgstr "Swap does not exist."
msgid "AtomicSwap.ClaimActionMismatch"
msgstr "Claim action mismatch."
msgid "AtomicSwap.DurationNotPassed"
msgstr "Duration has not yet passed for the swap to be cancelled."
msgid "Multisig.MinimumThreshold"
msgstr "Threshold must be 2 or greater."
msgid "Multisig.AlreadyApproved"
msgstr "Call is already approved by this signatory."
msgid "Multisig.NoApprovalsNeeded"
msgstr "Call doesn't need any (more) approvals."
msgid "Multisig.TooFewSignatories"
msgstr "There are too few signatories in the list."
msgid "Multisig.TooManySignatories"
msgstr "There are too many signatories in the list."
msgid "Multisig.SignatoriesOutOfOrder"
msgstr "The signatories were provided out of order; they should be ordered."
msgid "Multisig.SenderInSignatories"
msgstr "The sender was contained in the other signatories; it shouldn't be."
msgid "Multisig.NotFound"
msgstr "Multisig operation not found in storage."
msgid "Multisig.NotOwner"
msgstr "Only the account that originally created the multisig is able to cancel it or update
its deposits."
msgid "Multisig.NoTimepoint"
msgstr "No timepoint was given, yet the multisig operation is already underway."
msgid "Multisig.WrongTimepoint"
msgstr "A different timepoint was given to the multisig operation that is underway."
msgid "Multisig.UnexpectedTimepoint"
msgstr "A timepoint was given, yet no multisig operation is underway."
msgid "Multisig.MaxWeightTooLow"
msgstr "The maximum weight information provided was too low."
msgid "Multisig.AlreadyStored"
msgstr "The data to be stored is already stored."
msgid "ProvideRandomness.QueueFull"
msgstr "Request randomness queue is full."
msgid "Proxy.TooMany"
msgstr "There are too many proxies registered or too many announcements pending."
msgid "Proxy.NotFound"
msgstr "Proxy registration not found."
msgid "Proxy.NotProxy"
msgstr "Sender is not a proxy of the account to be proxied."
msgid "Proxy.Unproxyable"
msgstr "A call which is incompatible with the proxy type's filter was attempted."
msgid "Proxy.Duplicate"
msgstr "Account is already a proxy."
msgid "Proxy.NoPermission"
msgstr "Call may not be made by proxy because it may escalate its privileges."
msgid "Proxy.Unannounced"
msgstr "Announcement, if made at all, was made too recently."
msgid "Proxy.NoSelfProxy"
msgstr "Cannot add self as proxy."
msgid "Utility.TooManyCalls"
msgstr "Too many calls batched."
msgid "Treasury.InvalidIndex"
msgstr "No proposal, bounty or spend at that index."
msgid "Treasury.TooManyApprovals"
msgstr "Too many approvals in the queue."
msgid "Treasury.InsufficientPermission"
msgstr "The spend origin is valid but the amount it is allowed to spend is lower than the
amount to be spent."
msgid "Treasury.ProposalNotApproved"
msgstr "Proposal has not been approved."
msgid "Treasury.FailedToConvertBalance"
msgstr "The balance of the asset kind is not convertible to the balance of the native asset."
msgid "Treasury.SpendExpired"
msgstr "The spend has expired and cannot be claimed."
msgid "Treasury.EarlyPayout"
msgstr "The spend is not yet eligible for payout."
msgid "Treasury.AlreadyAttempted"
msgstr "The payment has already been attempted."
msgid "Treasury.PayoutError"
msgstr "There was some issue with the mechanism of payment."
msgid "Treasury.NotAttempted"
msgstr "The payout was not yet attempted/claimed."
msgid "Treasury.Inconclusive"
msgstr "The payment has neither failed nor succeeded yet."
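These entries use the gettext `msgid`/`msgstr` layout, so a client can map a raw runtime error id to its user-facing text. A minimal lookup sketch over a two-entry excerpt (the file path and helper name are arbitrary, and this handles single-line `msgstr` entries only):

```shell
# Look up a user-facing message by error id in a gettext-style catalog.
# A two-entry excerpt is written to a temp file for the demonstration.
cat > /tmp/errors-excerpt.po <<'EOF'
msgid "Balances.InsufficientBalance"
msgstr "Balance too low to send value."
msgid "Identity.IdtyNotFound"
msgstr "Identity not found."
EOF

lookup() {
  awk -v id="\"$1\"" '$1 == "msgid" && $2 == id {
    getline; line = $0
    sub(/^msgstr "/, "", line); sub(/"$/, "", line)   # strip msgstr "..." wrapper
    print line
  }' /tmp/errors-excerpt.po
}

lookup Identity.IdtyNotFound
```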
@@ -2,7 +2,7 @@

This is a beginner tutorial for those who do not have a previous experience with Rust ecosystem or need guidance to get familiar with Duniter v2s project. You'll need a development machine with an internet connection, at least **20 GB of free storage**, and **an hour or two** depending on your computing power.

This walkthrough is based on the following video (french), don't hesitate to record an english voicecover if you feel so.
This walkthrough is based on the following video (in French); don't hesitate to record an English voiceover if you feel like it.

[![preview](https://tube.p2p.legal/lazy-static/previews/654006dc-66c0-4e37-a32f-b7b5a1c13213.jpg)](https://tube.p2p.legal/w/n4TXxQ4SqxzpHPY4TNMXFu)

@@ -13,7 +13,7 @@ This walkthrough is based on the following video (french), don't hesitate to rec
If you are on a debian based system, you can install the required packages with:

```bash
sudo apt install cmake pkg-config libssl-dev git build-essential clang libclang-dev curl
sudo apt install cmake pkg-config libssl-dev git build-essential clang libclang-dev curl protobuf-compiler
```

Else, look at the corresponding section in the [system setup documentation](./setup.md).
+26 −0
Original line number Diff line number Diff line
# Compilation

Duniter is compiled using the Rust compiler. For a general overview, refer to the [Rustc Dev Guide](https://rustc-dev-guide.rust-lang.org/overview.html).

Substrate and Duniter provide a set of features that enable or disable parts of the code through [conditional compilation](https://doc.rust-lang.org/reference/conditional-compilation.html), or that toggle the compilation of entire packages. Below is a list of all available features:

## External

- **runtime-benchmarks**: Compiles the runtime with benchmarks for extrinsics benchmarking.
- **try-runtime**: Compiles the runtime for tests and verifies operations in a simulated environment.
- **std**: Enables the Rust standard library.

## Duniter

- **gdev**: Sets `gdev-runtime` and `std` used to build the development chain.
- **gtest**: Sets `gtest-runtime` and `std` used to build the test chain.
- **g1**: Sets `g1-runtime` and `std` used to build the production chain.
- **constant-fees**: Uses a constant and predictable weight-to-fee conversion, intended for testing only.
- **embed**: Enables hardcoded live chainspecs loaded from the "../specs/gtest-raw.json" file.
- **native**: Compiles the runtime into a native-platform executable, for debugging purposes only.

Note: By default, Duniter is compiled with the `gdev` feature, including compilation of the distance oracle. Since the three Duniter chains are mutually exclusive, the default feature must be disabled to compile `gtest` or `g1`, as follows:

- `cargo build --no-default-features --features gtest`
- `cargo build --no-default-features --features g1`
- `cargo build --no-default-features -p distance-oracle --features std`
+35 −0
Original line number Diff line number Diff line
# Duniter Pallet Conventions

## Call

Custom Duniter pallet calls should adhere to the standard Substrate naming convention:

- `action_` for regular calls (e.g., `create_identity`).
- `force_action_` for calls with a privileged origin (e.g., `force_set_distance_status`).

## Error

In the event of a call failure, it should trigger a pallet error with a self-explanatory name, for instance, `IdtyNotFound`.

## Event

Successful calls should deposit a system event to notify external entities of the change. The event name should be self-explanatory and structured in the form of a Rust struct with named fields, ensuring clarity in autogenerated documentation. An example is:

```rust
IdtyRemoved {
    idty_index: T::IdtyIndex,
    reason: IdtyRemovalReason<T::IdtyRemovalOtherReason>,
}
```

## Hook

Hooks are inherently infallible, and no errors should be emitted within them. To monitor progression from inside a hook, events can be used to inform external entities about changes (or the absence of changes).

## Internal Function

Internal functions should adhere to the following naming convention:

- `do_action_` for regular functions executing the base logic of a call (e.g., `do_remove_identity_`). These functions should directly emit events and trigger errors as needed.
- `force_action_` for privileged functions that bypass any checks. This can be useful for specific benchmarking functions.
- `check_` for functions performing checks and triggering errors in case of failure.

docs/dev/release.md

0 → 100644
+284 −0

File added.

Preview size limit exceeded, changes collapsed.

# How to replay a block

WARN: try-runtime is not properly implemented

You can use `try-runtime` subcommand to replay a block against a real state from a live network.

1. Checkout the git tag of the runtime version at the block you want to replay
@@ -9,13 +11,13 @@ You can use `try-runtime` subcommand to replay a block against a real state from
5. Replay the block a first time to get the state:

```
duniter try-runtime --exectuion=Native execute-block --block-at 0x2633026e3e428b010cfe08d215b6253843a9fe54db28748ca56de37e6a83c644 live -s tmp/snapshot1 -u ws://localhost:9944
duniter try-runtime --execution=Native execute-block --block-at 0x2633026e3e428b010cfe08d215b6253843a9fe54db28748ca56de37e6a83c644 live -s tmp/snapshot1 -u ws://localhost:9944
```

6. Then, replay the block as many times as you need against your local snapshot:

```
duniter try-runtime --exectuion=Native execute-block --block-at 0x2633026e3e428b010cfe08d215b6253843a9fe54db28748ca56de37e6a83c644 --block-ws-uri ws://localhost:9944 snap -s tmp/snapshot1
duniter try-runtime --execution=Native execute-block --block-at 0x2633026e3e428b010cfe08d215b6253843a9fe54db28748ca56de37e6a83c644 --block-ws-uri ws://localhost:9944 snap -s tmp/snapshot1
```

For now, try-runtime does not allow storing the block locally; only the storage can be saved.
+30 −13

File changed.

Preview size limit exceeded, changes collapsed.

-# Upgrade Substrate
+# Polkadot Upgrade Guide

-We need to keep up to date with Substrate. Here is an empirical guide.
+ParityTech frequently releases upgrades of the polkadot-sdk. For each upgrade, Duniter should be upgraded following the instructions below. These instructions are based on upgrading from version 1.8.0 to 1.9.0.

-Let's say for the example that we want to upgrade from `v0.9.26` to `v0.9.32`.
+## 1. Upgrade the duniter-polkadot-sdk

-## Upgrade Substrate fork
+* Clone the repository: `git clone git@github.com:duniter/duniter-polkadot-sdk.git`
+* Set the upstream repository: `git remote add upstream git@github.com:paritytech/polkadot-sdk.git`
+* Fetch the latest released version: `git fetch upstream tag polkadot-v1.9.0`
+* Create a new branch: `git checkout -b duniter-polkadot-v1.9.0`
+* Rebase the branch, keeping only the Duniter-specific commits: "fix treasury benchmarks when no SpendOrigin", "allow manual seal to produce non-empty blocks with BABE", "add custom pallet-balance GenesisConfig", "remove pallet-balances upgrade_account extrinsic", and "remove all paritytech sdk dependencies".
+* Push the new branch: `git push`

-TBD (only Élois has done this for now)
+## 2. Upgrade repository

-## Upgrade Subxt fork
+* In the `Cargo.toml` file of Duniter, change the version number from 1.8.0 to 1.9.0 for all polkadot-sdk dependencies, as well as the version of Subxt: `find . -type f -name "Cargo.toml" -exec sed -i'' -e 's/polkadot-v1.8.0/polkadot-v1.9.0/g' {} +`.
+* Upgrade the version numbers of all crates.io dependencies to ensure compatibility with those used in the polkadot-sdk; see the node template at [Node Template](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/node/Cargo.toml) (choose the correct branch/tag).

-1. Checkout the currently used branch in [our Subxt fork](https://github.com/duniter/subxt), e.g. `duniter-substrate-v0.9.26`
-2. Create a new branch `duniter-substrate-v0.9.32`
-3. Fetch the [upstream repository](https://github.com/paritytech/subxt)
-4. Rebase on an upstream stable branch matching the wanted version
+At this point, two cases may arise:

-## Upgrade Duniter
+1. If the upgrade only adds some types and minor changes, add the types in the pallet configuration, replace the offending `WeightInfo`, and delete the corresponding weights files until they can be regenerated.

-1. Replace `duniter-substrate-v0.9.26` with `duniter-substrate-v0.9.32` in `Cargo.toml`
-2. Update the `rust-toolchain` file according to [Polkadot release notes](https://github.com/paritytech/polkadot/releases)
-	* Tip: To save storage space on your machine, do `rm target -r` after changing the rust toolchain version and before re-building the project with the new version.
-3. While needed, iterate `cargo check`, `cargo update` and upgrading dependencies to match substrate's dependencies
-4. Fix errors in Duniter code
-	* You may need to check how Polkadot is doing by searching in [their repo](https://github.com/paritytech/polkadot). Luckily, the project structure and Substrate patterns are close enough to ours.
-	* Some errors may happen due to two semver-incompatible versions of a same crate being used. To check this, use `cargo tree -i <crate>`. Update the dependency accordingly, then do `cargo update`.
-5. As always, don't forget to `clippy` once you're done with the errors.
-6. Test benchmarking:
-	`cargo run --features runtime-benchmarks -- benchmark overhead --chain=dev --execution=wasm --wasm-execution=interpreted-i-know-what-i-do --weight-path=. --warmup=10 --repeat=100`
\ No newline at end of file
+2. If there are many breaking changes, it is recommended to break down the process:
+
+    * Start by correcting errors on individual pallets: use `cargo check -p my_pallet` to identify and rectify errors, then test with `cargo test -p my_pallet` and benchmark with `cargo test -p my_pallet --features runtime-benchmarks`.
+    * After correcting all pallets, fix the runtimes using the same approach: check for trait declarations added to or removed from each pallet configuration, and use `cargo check -p runtime`, `cargo test -p runtime`, and `cargo test -p runtime --features runtime-benchmarks`.
+    * Repeat this process with the node part, the distance-oracle, all the tests, xtask, and the client.
+    * Conclude the process by executing all benchmarks using `scripts/run_all_benchmarks.sh`.

+## 4. Troubleshooting

+As Duniter may sometimes be the only chain implementing advanced features, such as manual sealing, not many references can be found. However, the following projects may be useful:

+* Node template for a general up-to-date implementation: [Node Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates)
+* [Acala](https://github.com/AcalaNetwork/Acala), which also uses manual sealing and has a similar node implementation.
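The `find` + `sed` bump from step 2 can be rehearsed on a scratch directory before touching the real tree. Everything below (the `/tmp/bump-demo` path and the sample `Cargo.toml`) is illustrative, not the real repo layout:

```shell
# Rehearse the version bump on a scratch directory (illustrative layout).
rm -rf /tmp/bump-demo
mkdir -p /tmp/bump-demo/pallets/foo
cat > /tmp/bump-demo/pallets/foo/Cargo.toml <<'EOF'
[dependencies]
sp-core = { git = "https://github.com/duniter/duniter-polkadot-sdk", branch = "duniter-polkadot-v1.8.0" }
EOF

# Same find + sed invocation as in step 2, restricted to the scratch directory:
find /tmp/bump-demo -type f -name "Cargo.toml" -exec sed -i'' -e 's/polkadot-v1.8.0/polkadot-v1.9.0/g' {} +

# The branch reference now reads duniter-polkadot-v1.9.0:
grep 'branch' /tmp/bump-demo/pallets/foo/Cargo.toml
```

Running it on a copy first makes it easy to spot over-broad matches before the substitution is applied to every `Cargo.toml` in the workspace.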

docs/user/distance.md

0 → 100644
+29 −0

File added.


docs/user/fees.md

0 → 100644
+35 −0

File added.


docs/user/rpc.md

deleted 100644 → 0
+0 −78

File deleted.


docs/user/smith.md

deleted 100644 → 0
+0 −102

File deleted.


File changed.


File changed.


node/Cargo.toml

0 → 100644
+242 −0

File added.


node/README.md

0 → 100644
+3 −0
# Duniter Node

You can find the autogenerated documentation at: [https://doc-duniter-org.ipns.pagu.re/duniter/index.html](https://doc-duniter-org.ipns.pagu.re/duniter/index.html).

node/specs/gdev-raw.json

deleted 100644 → 0
+0 −95622

File deleted.


+54 −9

File changed.


+161 −162

File changed.


+49 −0

File changed.


File changed.


+98 −15

File changed.


+460 −340

File changed.


+14 −12

File changed.


pallets/identity/README.md

deleted 100644 → 0
+0 −43

File deleted.


+61 −0

File added.


+418 −0

File added.


File changed.


resources/g1-data.json

0 → 100644
+280074 −0

File added.


resources/gdev.json

deleted 100644 → 0
+0 −204

File deleted.


resources/gdev.yaml

0 → 100644
+70 −0

File added.


resources/gtest.yaml

0 → 100644
+35 −0

File added.


File changed.


+236 −160

File changed.


runtime/g1/README.md

0 → 100644
+4 −0

File added.


File changed.


File changed.


runtime/gdev/README.md

0 → 100644
+5 −0

File added.


File changed.


+10 −0

File added.


rustfmt.toml

0 → 100644
+4 −0

File added.


scripts/build-deb.sh

0 → 100755
+6 −0

File added.


+26 −21

File changed.


+32 −1

File changed.


+11 −0

File added.


File changed.


+22 −0

File added.


xtask/src/gen_calls_doc.rs

deleted 100644 → 0
+0 −284

File deleted.


xtask/src/gen_doc.rs

0 → 100644
+807 −0

File added.


xtask/src/gitlab.rs

0 → 100644
+358 −0

File added.


+318 −18

File changed.


+20 −0

File added.


+18 −0

File added.
