Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (65), showing 11609 additions and 5119 deletions
[alias]
sanity-gdev = "test -p duniter-live-tests --test sanity_gdev -- --nocapture"
tu = "test --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests --features constant-fees"
sanity-gdev = "test -Zgit=shallow-deps -p duniter-live-tests --test sanity_gdev -- --nocapture"
tu = "test -Zgit=shallow-deps --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests --features constant-fees" # Unit tests with constant-fees
tf = "test -Zgit=shallow-deps --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests test_fee" # Custom fee model tests
# `te` and `cucumber` are synonyms
te = "test -p duniter-end2end-tests --test cucumber_tests --features constant-fees --"
cucumber-build = "build --features constant-fees"
cucumber = "test -p duniter-end2end-tests --test cucumber_tests --"
ta = "test --workspace --exclude duniter-live-tests --features constant-fees"
tb = "test --features runtime-benchmarks -p"
rbp = "run --release --features runtime-benchmarks -- benchmark pallet --chain=dev --steps=50 --repeat=20 --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=. --pallet"
xtask = "run --package xtask --"
cucumber-node = "run -- --chain=gdev_dev --execution=Native --sealing=manual --force-authoring --rpc-cors=all --tmp --ws-port 9944 --alice --features constant-fees"
cucumber-build = "build -Zgit=shallow-deps --features constant-fees"
cucumber = "test -Zgit=shallow-deps -p duniter-end2end-tests --test cucumber_tests --"
ta = "test -Zgit=shallow-deps --workspace --exclude duniter-live-tests --features constant-fees"
tb = "test -Zgit=shallow-deps --features runtime-benchmarks -p"
rbp = "run -Zgit=shallow-deps --release --features runtime-benchmarks -- benchmark pallet --chain=dev --steps=50 --repeat=20 --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=. --pallet"
xtask = "run -Zgit=shallow-deps --package xtask --"
cucumber-node = "run -Zgit=shallow-deps -- --chain=gdev_dev --execution=Native --sealing=manual --force-authoring --rpc-cors=all --tmp --rpc-port 9944 --alice --features constant-fees"
......@@ -29,7 +29,7 @@ USER duniter
# check if executable works in this container
RUN /usr/local/bin/duniter --version
EXPOSE 30333 9933 9944
EXPOSE 30333 9944
VOLUME ["/duniter"]
ENTRYPOINT ["/usr/local/bin/duniter"]
......@@ -10,34 +10,46 @@
<img alt="logov2" src="https://duniter.fr/img/duniterv2.svg" width="128" height="128"/>
</div>
## Documentation TOC
## Documentation
- [README](./README.md)
Multiple documentation sources are available depending on the level of detail you need.
- Full technical Rust doc (auto-generated with `cargo xtask gen-doc`): https://doc-duniter-org.ipns.pagu.re/duniter/
- User and client developer doc (official website): https://duniter.org/wiki/duniter-v2/
- Internal documentation (within the git repository), see the table of contents below: [./doc](./doc)
### Internal documentation TOC
- [README](./README.md) (this file)
- [Use](#use)
- [Test](#test)
- [Contribute](#contribute)
- [Structure](#project-structure)
- [docker](./docker/) docker-related documentation
- [docs](./docs/)
- [api](./docs/api/)
- [manual](./docs/api/manual.md)
- [License](#license)
- [docs](./docs/) internal documentation
- [api](./docs/api/) API
- [manual](./docs/api/manual.md) manage accounts and identities
- [runtime-calls](./docs/api/runtime-calls.md) the calls you can submit through the RPC API
- [dev](./docs/dev/)
- [runtime-errors](./docs/api/runtime-errors.md) the errors you can get when submitting a call
- [runtime-events](./docs/api/runtime-events.md) the events you can get when submitting a call
- [dev](./docs/dev/) developer documentation
- [beginner-walkthrough](./docs/dev/beginner-walkthrough.md)
- [git-conventions](./docs/dev/git-conventions.md)
- [pallet_conventions](./docs/dev/pallet_conventions.md)
- [launch-a-live-network](./docs/dev/launch-a-live-network.md)
- [setup](./docs/dev/setup.md)
- [compilation features](./docs/dev/compilation.md)
- [verify-runtime-code](./docs/dev/verify-runtime-code.md)
- [weights-benchmarking](./docs/dev/weights-benchmarking.md)
- [upgrade-substrate](./docs/dev/upgrade-substrate.md)
- [test](./docs/test/)
- [replay-block](./docs/test/replay-block.md)
- [user](./docs/user/)
- [user](./docs/user/) user documentation
- [autocompletion](./docs/user/autocompletion.md)
- [build-for-arm](./docs/user/build-for-arm.md)
- [mirror](./docs/user/mirror.md) deploy a permanent ĞDev mirror node
- [smith](./docs/user/smith.md) deploy a permanent ĞDev validator node
- [debian installation](./docs/user/installation_debian.md)
- [distance](./docs/user/distance.md)
- [fees](./docs/user/fees.md)
- [packaging](./docs/packaging/) packaging
- [build-for-arm](./docs/packaging/build-for-arm.md) build for ARM architecture
- [build-debian](./docs/packaging/build-deb.md) build a native Debian package
- [docker](./docker/) docker-related documentation
- [end2end-tests](./end2end-tests/) automated end-to-end tests written with cucumber
- [live-tests](./live-tests/) sanity checks to test the storage of a live chain
......@@ -47,23 +59,23 @@
The easiest way is to use the docker image.
Minimal command to deploy a **temporary** mirror peer:
Minimal command to deploy a temporary mirror peer:
```docker
docker run -it -p9944:9944 -e DUNITER_CHAIN_NAME=gdev duniter/duniter-v2s:v0.4.0 --tmp --execution=Wasm
docker run -it -p9944:9944 -e DUNITER_CHAIN_NAME=gdev duniter/duniter-v2s-gdev-800:latest
```
To go further, read [How to deploy a permanent mirror node on ĞDev network](./docs/user/rpc.md).
To go further, read [How to deploy a permanent mirror node on ĞDev network 🔗](https://duniter.org/wiki/duniter-v2/#run-a-mirror-node).
### Create your local blockchain
It can be useful to deploy your own local blockchain, for instance to get a controlled environment in which to develop and test an application that interacts with the blockchain.
```docker
docker run -it -p9944:9944 duniter/duniter-v2s:v0.4.0 --tmp
docker run -it -p9944:9944 duniter/duniter-v2s-gdev-800:latest
```
Or use the `docker-compose.yml` at the root of this repository.
Or use the [`docker-compose.yml`](./docker-compose.yml) at the root of this repository.
#### Control when your local blockchain should produce blocks
......@@ -74,34 +86,9 @@ You can decide when to produce blocks with the cli option `--sealing` which has
- `--sealing=instant`: produce a block immediately upon receiving a transaction into the transaction pool
- `--sealing=manual`: produce a block upon receiving an RPC request (method `engine_createBlock`).
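With manual sealing, block production can be driven from any JSON-RPC client. As a minimal sketch (assuming the `ureq` and `serde_json` crates, a node listening on port 9944, and unsafe RPC methods being allowed), a block can be triggered like this:

```rust
// Illustrative sketch: trigger one block on a node started with `--sealing=manual`.
// The parameter list follows the manual-seal convention (create_empty, finalize, parent_hash).
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let response = ureq::post("http://127.0.0.1:9944").send_json(serde_json::json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "engine_createBlock",
        "params": [true, true, null]
    }))?;
    println!("node answered: {}", response.into_string()?);
    Ok(())
}
```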
### Autocompletion
See [autocompletion](./docs/user/autocompletion.md).
### Shell autocompletion
## Test
### Test a specific commit
At each commit on master, an image with the tag `debug-sha-********` is published, where `********`
corresponds to the first 8 hash characters of the commit.
Usage:
```docker
docker run -it -p9944:9944 --name duniter-v2s duniter/duniter-v2s:debug-sha-b836f1a6
```
Then open `https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944` in a browser.
Enable detailed logging:
```docker
docker run -it -p9944:9944 --name duniter-v2s \
-e RUST_LOG=debug \
-e RUST_BACKTRACE=1 \
-lruntime=debug \
duniter/duniter-v2s:debug-sha-b836f1a6
```
See [autocompletion](./docs/user/autocompletion.md) to generate shell autocompletion for duniter commands.
## Contribute
......@@ -128,20 +115,11 @@ cargo build
Use Rust's native `cargo` command to build and launch the node:
```sh
cargo run -- --dev --tmp
cargo run -- --dev
```
This will deploy a local blockchain with test accounts (Alice, Bob, etc.) in the genesis.
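You can also interact with the local node from code. A minimal sketch using the `subxt` crate (already used in this repository by the distance oracle) against the default `ws://127.0.0.1:9944` endpoint:

```rust
// Illustrative sketch: connect to the local dev node and print the latest block.
// Assumes the `subxt` and `tokio` crates; `SubstrateConfig` is enough here because
// fetching a block hash does not rely on any Duniter-specific type.
use subxt::{OnlineClient, SubstrateConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api = OnlineClient::<SubstrateConfig>::from_url("ws://127.0.0.1:9944").await?;
    let block = api.blocks().at_latest().await?;
    println!("latest block: #{} ({:?})", block.number(), block.hash());
    Ok(())
}
```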
## Single-Node Development Chain
This command will start the single-node development chain with persistent state:
```bash
./target/debug/duniter --dev --tmp
```
Then open `https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944` in a browser.
Open `https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944` to watch and interact with your node.
Start the development chain with detailed logging:
......@@ -149,140 +127,11 @@ Start the development chain with detailed logging:
RUST_LOG=debug RUST_BACKTRACE=1 ./target/debug/duniter -lruntime=debug --dev
```
## Multi-Node Local Testnet
If you want to see the multi-node consensus algorithm in action, refer to
[our Start a Private Network tutorial](https://substrate.dev/docs/en/tutorials/start-a-private-network/).
### Purge previous local testnet
```
./target/debug/duniter purge-chain --base-path /tmp/alice --chain local
./target/debug/duniter purge-chain --base-path /tmp/bob --chain local
```
### Start Alice's node
```bash
./target/debug/duniter \
--base-path /tmp/alice \
--chain local \
--alice \
--port 30333 \
--ws-port 9945 \
--rpc-port 9933 \
--node-key 0000000000000000000000000000000000000000000000000000000000000001 \
--validator
```
### Start Bob's node
## License
```bash
./target/debug/duniter \
--base-path /tmp/bob \
--chain local \
--bob \
--port 30334 \
--ws-port 9946 \
--rpc-port 9934 \
--validator \
--bootnodes /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp
```
See [LICENSE](./LICENSE)
## Project Structure
A Substrate project such as this consists of a number of components that are spread across a few
directories.
### Node
A blockchain node is an application that allows users to participate in a blockchain network.
Substrate-based blockchain nodes expose a number of capabilities:
- Networking: Substrate nodes use the [`libp2p`](https://libp2p.io/) networking stack to allow the
nodes in the network to communicate with one another.
- Consensus: Blockchains must have a way to come to
[consensus](https://substrate.dev/docs/en/knowledgebase/advanced/consensus) on the state of the
network. Substrate makes it possible to supply custom consensus engines and also ships with
several consensus mechanisms that have been built on top of
[Web3 Foundation research](https://research.web3.foundation/en/latest/polkadot/NPoS/index.html).
- RPC Server: A remote procedure call (RPC) server is used to interact with Substrate nodes.
There are several files in the `node` directory - take special note of the following:
- [`chain_spec.rs`](./node/src/chain_spec.rs): A
[chain specification](https://substrate.dev/docs/en/knowledgebase/integrate/chain-spec) is a
source code file that defines a Substrate chain's initial (genesis) state. Chain specifications
are useful for development and testing, and critical when architecting the launch of a
production chain. Take note of the `development_chain_spec` and `testnet_genesis` functions, which
are used to define the genesis state for the local development chain configuration. These
functions identify some
[well-known accounts](https://substrate.dev/docs/en/knowledgebase/integrate/subkey#well-known-keys)
and use them to configure the blockchain's initial state.
- [`service.rs`](./node/src/service.rs): This file defines the node implementation. Take note of
the libraries that this file imports and the names of the functions it invokes. In particular,
there are references to consensus-related topics, such as the
[longest chain rule](https://substrate.dev/docs/en/knowledgebase/advanced/consensus#longest-chain-rule),
the [Babe](https://substrate.dev/docs/en/knowledgebase/advanced/consensus#babe) block authoring
mechanism and the
[GRANDPA](https://substrate.dev/docs/en/knowledgebase/advanced/consensus#grandpa) finality
gadget.
After the node has been [built](#build), refer to the embedded documentation to learn more about the
capabilities and configuration parameters that it exposes:
```shell
./target/debug/duniter --help
```
### Runtime
In Substrate, the terms
"[runtime](https://substrate.dev/docs/en/knowledgebase/getting-started/glossary#runtime)" and
"[state transition function](https://substrate.dev/docs/en/knowledgebase/getting-started/glossary#stf-state-transition-function)"
are analogous - they refer to the core logic of the blockchain that is responsible for validating
blocks and executing the state changes they define. The Substrate project in this repository uses
the [FRAME](https://substrate.dev/docs/en/knowledgebase/runtime/frame) framework to construct a
blockchain runtime. FRAME allows runtime developers to declare domain-specific logic in modules
called "pallets". At the heart of FRAME is a helpful
[macro language](https://substrate.dev/docs/en/knowledgebase/runtime/macros) that makes it easy to
create pallets and flexibly compose them to create blockchains that can address
[a variety of needs](https://www.substrate.io/substrate-users/).
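As a rough intuition (a toy sketch, unrelated to the actual Duniter runtime or to FRAME), a state transition function simply maps the previous state plus a block of calls to a new state:

```rust
// Toy illustration of a state transition function, with no Substrate types involved:
// the "runtime" consumes the current state and a block of calls and returns the new state.
use std::collections::BTreeMap;

#[derive(Clone, Debug)]
struct State {
    balances: BTreeMap<&'static str, u64>,
}

enum Call {
    Transfer { from: &'static str, to: &'static str, amount: u64 },
}

fn apply_block(mut state: State, block: Vec<Call>) -> State {
    for call in block {
        match call {
            Call::Transfer { from, to, amount } => {
                let from_balance = state.balances.entry(from).or_default();
                if *from_balance >= amount {
                    *from_balance -= amount;
                    *state.balances.entry(to).or_default() += amount;
                }
            }
        }
    }
    state
}

fn main() {
    let genesis = State { balances: BTreeMap::from([("alice", 100)]) };
    let next = apply_block(
        genesis,
        vec![Call::Transfer { from: "alice", to: "bob", amount: 40 }],
    );
    println!("{:?}", next.balances); // {"alice": 60, "bob": 40}
}
```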
Review the [FRAME runtime implementation](./runtime/src/lib.rs) included in this template and note
the following:
- This file configures several pallets to include in the runtime. Each pallet configuration is
defined by a code block that begins with `impl $PALLET_NAME::Config for Runtime`.
- The pallets are composed into a single runtime by way of the
[`construct_runtime!`](https://crates.parity.io/frame_support/macro.construct_runtime.html)
macro, which is part of the core
[FRAME Support](https://substrate.dev/docs/en/knowledgebase/runtime/frame#support-library)
library.
### Pallets
The runtime in this project is constructed using many FRAME pallets that ship with the
[core Substrate repository](https://github.com/paritytech/substrate/tree/master/frame) and a
template pallet that is [defined in the `pallets`](./pallets/template/src/lib.rs) directory.
A FRAME pallet is composed of a number of blockchain primitives:
- Storage: FRAME defines a rich set of powerful
[storage abstractions](https://substrate.dev/docs/en/knowledgebase/runtime/storage) that makes
it easy to use Substrate's efficient key-value database to manage the evolving state of a
blockchain.
- Dispatchables: FRAME pallets define special types of functions that can be invoked (dispatched)
from outside of the runtime in order to update its state.
- Events: Substrate uses [events](https://substrate.dev/docs/en/knowledgebase/runtime/events) to
notify users of important changes in the runtime.
- Errors: When a dispatchable fails, it returns an error.
- Config: The `Config` configuration interface is used to define the types and parameters upon
which a FRAME pallet depends.
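To make these primitives concrete, here is a minimal, hypothetical pallet skeleton in the FRAME v2 macro style (a sketch only: attribute details vary between Substrate versions, and the names `Something`, `store_something`, and `ValueTooLarge` are illustrative, not part of this repository):

```rust
#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;
    use frame_system::pallet_prelude::*;

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    /// Config: the types and parameters the pallet depends on.
    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// The overarching runtime event type.
        type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;
    }

    /// Storage: a single persisted value.
    #[pallet::storage]
    pub type Something<T> = StorageValue<_, u32>;

    /// Events: emitted to notify users of important changes.
    #[pallet::event]
    #[pallet::generate_deposit(pub(super) fn deposit_event)]
    pub enum Event<T: Config> {
        SomethingStored { value: u32 },
    }

    /// Errors: returned when a dispatchable fails.
    #[pallet::error]
    pub enum Error<T> {
        ValueTooLarge,
    }

    /// Dispatchables: callable from outside the runtime to update its state.
    #[pallet::call]
    impl<T: Config> Pallet<T> {
        #[pallet::call_index(0)]
        #[pallet::weight(Weight::from_parts(10_000, 0))]
        pub fn store_something(origin: OriginFor<T>, value: u32) -> DispatchResult {
            let _who = ensure_signed(origin)?;
            if value >= 1_000 {
                return Err(Error::<T>::ValueTooLarge.into());
            }
            Something::<T>::put(value);
            Self::deposit_event(Event::SomethingStored { value });
            Ok(())
        }
    }
}
```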
## License
CopyLeft 2021-2023 Axiom-Team
Some parts borrowed from Polkadot (Parity Technologies (UK) Ltd.)
......@@ -298,3 +147,4 @@ GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with Duniter-v2S. If not, see <https://www.gnu.org/licenses/>.
```
......@@ -13,14 +13,10 @@ targets = ["x86_64-unknown-linux-gnu"]
[features]
std = [
"codec/std",
"frame-support/std",
"log/std",
"pallet-distance/std",
"scale-info/std",
"sp-core/std",
"sp-distance/std",
"sp-keystore/std",
"sp-runtime/std",
]
runtime-benchmarks = [
......@@ -36,14 +32,11 @@ try-runtime = [
]
[dependencies]
codec = { workspace = true, features = ["derive"] }
frame-support = { workspace = true }
log = { workspace = true }
pallet-distance = { workspace = true }
sc-client-api = { workspace = true }
scale-info = { workspace = true, features = ["derive"] }
sp-core = { workspace = true }
sp-distance = { workspace = true }
sp-keystore = { workspace = true }
sp-runtime = { workspace = true }
thiserror = { workspace = true }
# Distance Oracle Inherent Data Provider
You can find the autogenerated documentation at: [https://doc-duniter-org.ipns.pagu.re/dc_distance/index.html](https://doc-duniter-org.ipns.pagu.re/dc_distance/index.html).
......@@ -14,13 +14,37 @@
// You should have received a copy of the GNU Affero General Public License
// along with Substrate-Libre-Currency. If not, see <https://www.gnu.org/licenses/>.
use codec::{Decode, Encode};
//! # Distance Oracle Inherent Data Provider
//!
//! This crate provides functionality for creating an **inherent data provider**
//! specifically designed for the "Distance Oracle".
//! The inherent data provider is responsible for fetching and delivering
//! computation results required for the runtime to process distance evaluations.
//!
//! ## Relationship with Distance Oracle
//!
//! The **distance-oracle** is responsible for computing distance evaluations,
//! storing the results to be read in the next period, and saving them to files.
//! These files are then read by **this inherent data provider**
//! to provide the required data to the runtime.
//!
//! ## Overview
//!
//! - Retrieves **period index** and **evaluation results** from the storage and file system.
//! - Determines whether the computation results for the current period have already been published.
//! - Reads and parses evaluation result files when available, providing the necessary data to the runtime.
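//!
//! For illustration (a sketch only, not an exported API), during period `p` the provider
//! simply looks for the file written by the oracle for that period and decodes it:
//!
//! ```ignore
//! // Simplified version of the lookup performed below in
//! // `create_distance_inherent_data_provider`:
//! let path = distance_dir.join(format!("{VERSION_PREFIX}{p}"));
//! let result = std::fs::read(&path)
//!     .ok()
//!     .and_then(|bytes| sp_distance::ComputationResult::decode(&mut bytes.as_slice()).ok());
//! ```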
use frame_support::pallet_prelude::*;
use sc_client_api::{ProvideUncles, StorageKey, StorageProvider};
use scale_info::TypeInfo;
use sp_runtime::{generic::BlockId, traits::Block as BlockT, AccountId32};
use std::path::PathBuf;
/// The file version that should match the distance oracle one.
/// This ensures that the smith avoids accidentally submitting invalid data
/// in case there are changes in logic between the runtime and the oracle,
/// thereby preventing potential penalties.
const VERSION_PREFIX: &str = "001-";
type IdtyIndex = u32;
#[derive(Debug, thiserror::Error)]
......@@ -35,38 +59,42 @@ pub fn create_distance_inherent_data_provider<B, C, Backend>(
parent: B::Hash,
distance_dir: PathBuf,
owner_keys: &[sp_core::sr25519::Public],
) -> Result<sp_distance::InherentDataProvider<IdtyIndex>, sc_client_api::blockchain::Error>
) -> sp_distance::InherentDataProvider<IdtyIndex>
where
B: BlockT,
C: ProvideUncles<B> + StorageProvider<B, Backend>,
Backend: sc_client_api::Backend<B>,
IdtyIndex: Decode + Encode + PartialEq + TypeInfo,
{
let &[owner_key] = owner_keys else {
log::error!("🧙 [distance oracle] Expected exactly one Babe owner key, found {}: oracle cannot work", owner_keys.len());
return Ok(sp_distance::InherentDataProvider::<IdtyIndex>::new(None));
};
let owner_key = sp_runtime::AccountId32::new(owner_key.0);
let pool_index = client
// Retrieve the period_index from storage.
let period_index = client
.storage(
parent,
&StorageKey(
frame_support::storage::storage_prefix(b"Distance", b"CurrentPoolIndex").to_vec(),
frame_support::storage::storage_prefix(b"Distance", b"CurrentPeriodIndex").to_vec(),
),
)
.expect("CurrentIndex is Err")
.map_or(0, |raw| {
u32::decode(&mut &raw.0[..]).expect("cannot decode CurrentIndex")
});
.ok()
.flatten()
.and_then(|raw| u32::decode(&mut &raw.0[..]).ok());
// Return early if the storage is inaccessible or the data is corrupted.
let period_index = match period_index {
Some(index) => index,
None => {
log::error!("🧙 [distance inherent] PeriodIndex decoding failed.");
return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
}
};
// Retrieve the published_results from storage.
let published_results = client
.storage(
parent,
&StorageKey(
frame_support::storage::storage_prefix(
b"Distance",
match pool_index {
match period_index % 3 {
0 => b"EvaluationPool0",
1 => b"EvaluationPool1",
2 => b"EvaluationPool2",
......@@ -75,42 +103,84 @@ where
)
.to_vec(),
),
)?
.map_or_else(Default::default, |raw| {
pallet_distance::EvaluationPool::<AccountId32, IdtyIndex>::decode(&mut &raw.0[..])
.expect("cannot decode EvaluationPool")
)
.ok()
.flatten()
.and_then(|raw| {
pallet_distance::EvaluationPool::<AccountId32, IdtyIndex>::decode(&mut &raw.0[..]).ok()
});
// Return early if the storage is inaccessible or the data is corrupted.
let published_results = match published_results {
Some(published_results) => published_results,
None => {
log::info!("🧙 [distance inherent] No published result at this block.");
return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
}
};
// Find the account associated with the BABE key that is in our owner keys.
let mut local_account = None;
for key in owner_keys {
// Session::KeyOwner is StorageMap<_, Twox64Concat, (KeyTypeId, Vec<u8>), AccountId32, OptionQuery>
// Slices (variable length) and array (fixed length) are encoded differently, so the `.as_slice()` is needed
let item_key = (sp_runtime::KeyTypeId(*b"babe"), key.0.as_slice()).encode();
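// Build the full storage key for this Twox64Concat map entry:
// storage_prefix("Session", "KeyOwner") ++ twox_64(item_key) ++ item_key.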
let mut storage_key =
frame_support::storage::storage_prefix(b"Session", b"KeyOwner").to_vec();
storage_key.extend_from_slice(&sp_core::twox_64(&item_key));
storage_key.extend_from_slice(&item_key);
if let Some(raw_data) = client
.storage(parent, &StorageKey(storage_key))
.ok()
.flatten()
{
if let Ok(key_owner) = AccountId32::decode(&mut &raw_data.0[..]) {
local_account = Some(key_owner);
break;
} else {
log::warn!("🧙 [distance inherent] Cannot decode key owner value");
}
}
}
// Have we already published a result for this period?
if published_results.evaluators.contains(&owner_key) {
log::debug!("🧙 [distance oracle] Already published a result for this period");
return Ok(sp_distance::InherentDataProvider::<IdtyIndex>::new(None));
if let Some(local_account) = local_account {
if published_results.evaluators.contains(&local_account) {
log::debug!("🧙 [distance inherent] Already published a result for this period");
return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
}
} else {
log::error!("🧙 [distance inherent] Cannot find our BABE owner key");
return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
}
// Read evaluation result from file, if it exists
log::debug!(
"🧙 [distance oracle] Reading evaluation result from file {:?}",
distance_dir.clone().join(pool_index.to_string())
"🧙 [distance inherent] Reading evaluation result from file {:?}",
distance_dir.clone().join(period_index.to_string())
);
let evaluation_result = match std::fs::read(distance_dir.join(pool_index.to_string())) {
let evaluation_result = match std::fs::read(
distance_dir.join(VERSION_PREFIX.to_owned() + &period_index.to_string()),
) {
Ok(data) => data,
Err(e) => {
match e.kind() {
std::io::ErrorKind::NotFound => {
log::debug!("🧙 [distance oracle] Evaluation result file not found");
log::debug!("🧙 [distance inherent] Evaluation result file not found. Please ensure that the oracle version matches {}", VERSION_PREFIX);
}
_ => {
log::error!(
"🧙 [distance oracle] Cannot read distance evaluation result file: {e:?}"
"🧙 [distance inherent] Cannot read distance evaluation result file: {e:?}"
);
}
}
return Ok(sp_distance::InherentDataProvider::<IdtyIndex>::new(None));
return sp_distance::InherentDataProvider::<IdtyIndex>::new(None);
}
};
log::info!("🧙 [distance oracle] Providing evaluation result");
Ok(sp_distance::InherentDataProvider::<IdtyIndex>::new(Some(
log::info!("🧙 [distance inherent] Providing evaluation result");
sp_distance::InherentDataProvider::<IdtyIndex>::new(Some(
sp_distance::ComputationResult::decode(&mut evaluation_result.as_slice()).unwrap(),
)))
))
}
......@@ -12,27 +12,23 @@ required-features = ["standalone"]
[features]
default = ["standalone", "std"]
# Feature standalone is for CLI executable
standalone = ["clap", "tokio"]
# Feature std is needed
std = [
"codec/std",
"fnv/std",
"hex/std",
"log/std",
"num-traits/std",
"sp-core/std",
"sp-distance/std",
"sp-runtime/std",
]
try-runtime = ["sp-distance/try-runtime", "sp-runtime/try-runtime"]
runtime-benchmarks = []
[dependencies]
clap = { workspace = true, features = ["derive"], optional = true }
codec = { workspace = true }
fnv = { workspace = true }
hex = { workspace = true }
log = { workspace = true }
num-traits = { workspace = true }
rayon = { workspace = true }
simple_logger = { workspace = true }
sp-core = { workspace = true }
......
# Distance oracle
# Distance Oracle
> For an explanation of the Duniter web of trust, see https://duniter.org/wiki/web-of-trust/deep-dive-wot/
Distance computation on the Duniter web of trust is an expensive operation that should not be included in the runtime for multiple reasons:
- it could exceed the time available for a block computation
- it takes a lot of resources from the host machine
- the result is not critical to the operation of Ğ1
It is therefore separated into another program that the user (a Duniter smith) can choose to run or not. This program publishes its result in an inherent, and the network selects the median of the results given by the smiths who published one.
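To make the aggregation rule concrete, here is an illustrative sketch (the runtime actually uses an incremental median accumulator, `MedianAcc`, in the distance pallet; this standalone version is only for intuition and its tie-breaking choice is arbitrary):

```rust
use sp_runtime::Perbill;

/// Median of the results published by the smiths (illustrative only).
fn median_of_published(mut results: Vec<Perbill>) -> Option<Perbill> {
    if results.is_empty() {
        return None;
    }
    results.sort();
    // For an even number of results, arbitrarily take the lower middle value.
    Some(results[(results.len() - 1) / 2])
}

fn main() {
    let published = vec![
        Perbill::from_percent(80),
        Perbill::from_percent(95),
        Perbill::from_percent(90),
    ];
    assert_eq!(median_of_published(published), Some(Perbill::from_percent(90)));
}
```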
## Structure
This feature is organized in multiple parts:
- **/distance-oracle/** (here): binary executing the distance algorithm
- **/primitives/distance/**: primitive types used both by client and runtime
- **/client/distance/**: exposes the `create_distance_inherent_data_provider` which provides data to the runtime
- **/pallets/distance/**: distance pallet exposing types, traits, and the storage/calls/hooks executed in the runtime
## Usage (with Docker)
See [docker-compose.yml](../docker-compose.yml) for an example of how to run the distance oracle with Docker.
Output:
2023-12-09T14:45:05.942Z INFO [distance_oracle] Nothing to do: Pool does not exist
Waiting 1800 seconds before next execution...
\ No newline at end of file
You can find the autogenerated documentation at: [https://doc-duniter-org.ipns.pagu.re/distance_oracle/index.html](https://doc-duniter-org.ipns.pagu.re/distance_oracle/index.html).
......@@ -19,13 +19,14 @@
use crate::runtime;
use log::debug;
use sp_core::H256;
pub type Client = subxt::OnlineClient<crate::RuntimeConfig>;
pub type AccountId = subxt::utils::AccountId32;
pub type IdtyIndex = u32;
pub type EvaluationPool =
runtime::runtime_types::pallet_distance::types::EvaluationPool<AccountId, IdtyIndex>;
pub type H256 = subxt::utils::H256;
pub async fn client(rpc_url: String) -> Client {
pub async fn client(rpc_url: impl AsRef<str>) -> Client {
Client::from_insecure_url(rpc_url)
.await
.expect("Cannot create RPC client")
......@@ -40,11 +41,11 @@ pub async fn parent_hash(client: &Client) -> H256 {
.hash()
}
pub async fn current_pool_index(client: &Client, parent_hash: H256) -> u32 {
pub async fn current_period_index(client: &Client, parent_hash: H256) -> u32 {
client
.storage()
.at(parent_hash)
.fetch(&runtime::storage().distance().current_pool_index())
.fetch(&runtime::storage().distance().current_period_index())
.await
.expect("Cannot fetch current pool index")
.unwrap_or_default()
......@@ -54,7 +55,7 @@ pub async fn current_pool(
client: &Client,
parent_hash: H256,
current_pool_index: u32,
) -> Option<runtime::runtime_types::pallet_distance::types::EvaluationPool<AccountId, IdtyIndex>> {
) -> Option<EvaluationPool> {
client
.storage()
.at(parent_hash)
......@@ -106,17 +107,26 @@ pub async fn member_iter(client: &Client, evaluation_block: H256) -> MemberIter
}
pub struct MemberIter(
subxt::backend::StreamOfResults<(
Vec<u8>,
runtime::runtime_types::sp_membership::MembershipData<u32>,
)>,
subxt::backend::StreamOfResults<
subxt::storage::StorageKeyValuePair<
subxt::storage::StaticAddress<
(),
runtime::runtime_types::sp_membership::MembershipData<u32>,
(),
(),
subxt::utils::Yes,
>,
>,
>,
);
impl MemberIter {
pub async fn next(&mut self) -> Result<Option<IdtyIndex>, subxt::error::Error> {
self.0.next().await.transpose().map(|i| {
i.map(|(storage_key, _membership_data)| idty_id_from_storage_key(&storage_key))
})
self.0
.next()
.await
.transpose()
.map(|i| i.map(|j| idty_id_from_storage_key(&j.key_bytes)))
}
}
......@@ -131,15 +141,29 @@ pub async fn cert_iter(client: &Client, evaluation_block: H256) -> CertIter {
)
}
pub struct CertIter(subxt::backend::StreamOfResults<(Vec<u8>, Vec<(IdtyIndex, u32)>)>);
pub struct CertIter(
subxt::backend::StreamOfResults<
subxt::storage::StorageKeyValuePair<
subxt::storage::StaticAddress<
(),
Vec<(u32, u32)>,
(),
subxt::utils::Yes,
subxt::utils::Yes,
>,
>,
>,
);
impl CertIter {
pub async fn next(
&mut self,
) -> Result<Option<(IdtyIndex, Vec<(IdtyIndex, u32)>)>, subxt::error::Error> {
self.0.next().await.transpose().map(|i| {
i.map(|(storage_key, issuers)| (idty_id_from_storage_key(&storage_key), issuers))
})
self.0
.next()
.await
.transpose()
.map(|i| i.map(|j| (idty_id_from_storage_key(&j.key_bytes), j.value)))
}
}
......
......@@ -14,6 +14,38 @@
// You should have received a copy of the GNU Affero General Public License
// along with Duniter-v2S. If not, see <https://www.gnu.org/licenses/>.
//! # Distance Oracle
//!
//! The **Distance Oracle** is a standalone program designed to calculate the distances between identities in the Duniter Web of Trust (WoT). This process is computationally intensive and is therefore decoupled from the main runtime. It allows smith users to choose whether to run the oracle and provide results to the network.
//!
//! The **oracle** works in conjunction with the **Inherent Data Provider** and the **Distance Pallet** in the runtime to deliver periodic computation results. The **Inherent Data Provider** fetches and supplies these results to the runtime, ensuring that the necessary data for distance evaluations is available to be processed at the appropriate time in the runtime lifecycle.
//!
//! ## Structure
//!
//! The Distance Oracle is organized into the following modules:
//!
//! 1. **`/distance-oracle/`**: Contains the main binary for executing the distance computation.
//! 2. **`/primitives/distance/`**: Defines primitive types shared between the client and runtime.
//! 3. **`/client/distance/`**: Exposes the `create_distance_inherent_data_provider`, which feeds data into the runtime through the Inherent Data Provider.
//! 4. **`/pallets/distance/`**: A pallet that handles distance-related types, traits, storage, and hooks in the runtime, coordinating the interaction between the oracle, inherent data provider, and runtime.
//!
//! ## How it works
//! - The **Distance Pallet** adds an evaluation request at period `i` in the runtime.
//! - The **Distance Oracle** evaluates this request at period `i + 1`, computes the necessary results and stores them on disk.
//! - The **Inherent Data Provider** reads this evaluation result from disk at period `i + 2` and provides it to the runtime to perform the required operations.
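//!
//! As a compressed sketch of the file naming that ties these periods together
//! (illustrative only; see `prepare_evaluation_context` below and the inherent data
//! provider in `client/distance` for the real logic):
//!
//! ```ignore
//! // Oracle, running during period `p`: write the result for the *next* period.
//! let write_path = evaluation_result_dir.join(format!("{VERSION_PREFIX}{}", p + 1));
//! // Inherent data provider, running during period `p`: read the file named after the
//! // current period, i.e. the one the oracle produced one period earlier.
//! let read_path = distance_dir.join(format!("{VERSION_PREFIX}{p}"));
//! ```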
//!
//! ## Usage
//!
//! ### Docker Integration
//!
//! To run the Distance Oracle, use the provided Docker setup. Refer to the [docker-compose.yml](../docker-compose.yml) file for an example configuration.
//!
//! Example Output:
//! ```text
//! 2023-12-09T14:45:05.942Z INFO [distance_oracle] Nothing to do: Pool does not exist
//! Waiting 1800 seconds before next execution...
//! ```
#[cfg(not(test))]
pub mod api;
#[cfg(test)]
......@@ -24,17 +56,20 @@ mod tests;
#[cfg(test)]
pub use mock as api;
use api::{AccountId, IdtyIndex};
use api::{AccountId, EvaluationPool, IdtyIndex, H256};
use codec::Encode;
use fnv::{FnvHashMap, FnvHashSet};
use log::{debug, error, info, warn};
use rayon::iter::IntoParallelRefIterator;
use rayon::iter::ParallelIterator;
use std::io::Write;
use std::path::PathBuf;
use log::{debug, info, warn};
use rayon::iter::{IntoParallelRefIterator, ParallelIterator};
use std::{io::Write, path::PathBuf};
/// The file version must match the version used by the inherent data provider.
/// This ensures that the smith avoids accidentally submitting invalid data
/// in case there are changes in logic between the runtime and the oracle,
/// thereby preventing potential penalties.
const VERSION_PREFIX: &str = "001-";
// TODO select metadata file using features
#[subxt::subxt(runtime_metadata_path = "../resources/metadata.scale")]
pub mod runtime {}
......@@ -44,13 +79,14 @@ impl subxt::config::Config for RuntimeConfig {
type Address = subxt::ext::sp_runtime::MultiAddress<Self::AccountId, u32>;
type AssetId = ();
type ExtrinsicParams = subxt::config::substrate::SubstrateExtrinsicParams<Self>;
type Hash = sp_core::H256;
type Hash = subxt::utils::H256;
type Hasher = subxt::config::substrate::BlakeTwo256;
type Header =
subxt::config::substrate::SubstrateHeader<u32, subxt::config::substrate::BlakeTwo256>;
type Signature = subxt::ext::sp_runtime::MultiSignature;
}
/// Represents a tipping amount.
#[derive(Copy, Clone, Debug, Default, Encode)]
pub struct Tip {
#[codec(compact)]
......@@ -69,6 +105,7 @@ impl From<u64> for Tip {
}
}
/// Represents configuration parameters.
pub struct Settings {
pub evaluation_result_dir: PathBuf,
pub rpc_url: String,
......@@ -83,9 +120,18 @@ impl Default for Settings {
}
}
pub async fn run_and_save(client: &api::Client, settings: Settings) {
let Some((evaluation, current_pool_index, evaluation_result_path)) =
run(client, &settings, true).await
/// Runs the evaluation process, saves the results, and cleans up old files.
///
/// This function performs the following steps:
/// 1. Runs the evaluation task by invoking `compute_distance_evaluation`, which provides:
/// - The evaluation results.
/// - The current period index.
/// - The file path where the results should be stored.
/// 2. Saves the evaluation results to a file in the specified directory.
/// 3. Cleans up outdated evaluation files.
pub async fn run(client: &api::Client, settings: &Settings) {
let Some((evaluation, current_period_index, evaluation_result_path)) =
compute_distance_evaluation(client, settings).await
else {
return;
};
......@@ -113,82 +159,52 @@ pub async fn run_and_save(client: &api::Client, settings: Settings) {
)
});
// Remove old results
let mut files_to_remove = Vec::new();
for entry in settings
// When a new result is written, remove old results except for the current period used by the inherent logic and the next period that was just generated.
settings
.evaluation_result_dir
.read_dir()
.unwrap_or_else(|e| {
panic!(
"Cannot read distance evaluation result directory `{0:?}`: {e:?}",
settings.evaluation_result_dir
"Cannot read distance evaluation result directory `{:?}`: {:?}",
settings.evaluation_result_dir, e
)
})
.flatten()
{
if let Ok(entry_name) = entry.file_name().into_string() {
if let Ok(entry_pool) = entry_name.parse::<isize>() {
if current_pool_index as isize - entry_pool > 3 {
files_to_remove.push(entry.path());
}
}
}
}
files_to_remove.into_iter().for_each(|f| {
std::fs::remove_file(&f)
.unwrap_or_else(move |e| warn!("Cannot remove old result file `{f:?}`: {e:?}"));
});
.filter_map(|entry| {
entry
.file_name()
.to_str()
.and_then(|name| {
name.split('-').last()?.parse::<u32>().ok().filter(|&pool| {
pool != current_period_index && pool != current_period_index + 1
})
})
.map(|_| entry.path())
})
.for_each(|path| {
std::fs::remove_file(&path)
.unwrap_or_else(|e| warn!("Cannot remove file `{:?}`: {:?}", path, e));
});
}
/// Returns `Option<(evaluation, current_pool_index, evaluation_result_path)>`
pub async fn run(
/// Evaluates distance for the current period and prepares results for storage.
///
/// This function performs the following steps:
/// 1. Prepares the evaluation context using `prepare_evaluation_context`. If the context is not
/// ready (e.g., no pending evaluations, or results already exist), it returns `None`.
/// 2. Evaluates distances for all identities in the evaluation pool.
/// 3. Returns the evaluation results, the current period index, and the path to store the results.
///
pub async fn compute_distance_evaluation(
client: &api::Client,
settings: &Settings,
handle_fs: bool,
) -> Option<(Vec<sp_runtime::Perbill>, u32, PathBuf)> {
let parent_hash = api::parent_hash(client).await;
let max_depth = api::max_referee_distance(client).await;
let (evaluation_block, current_period_index, evaluation_pool, evaluation_result_path) =
prepare_evaluation_context(client, settings).await?;
let current_pool_index = api::current_pool_index(client, parent_hash).await;
// Fetch the pending identities
let Some(evaluation_pool) = api::current_pool(client, parent_hash, current_pool_index).await
else {
info!("Nothing to do: Pool does not exist");
return None;
};
// Stop if nothing to evaluate
if evaluation_pool.evaluations.0.is_empty() {
info!("Nothing to do: Pool is empty");
return None;
}
let evaluation_result_path = settings
.evaluation_result_dir
.join((current_pool_index + 1).to_string());
if handle_fs {
// Stop if already evaluated
if evaluation_result_path
.try_exists()
.expect("Result path unavailable")
{
info!("Nothing to do: File already exists");
return None;
}
std::fs::create_dir_all(&settings.evaluation_result_dir).unwrap_or_else(|e| {
error!(
"Cannot create distance evaluation result directory `{0:?}`: {e:?}",
settings.evaluation_result_dir
);
});
}
info!("Evaluating distance for period {}", current_period_index);
info!("Evaluating distance for pool {}", current_pool_index);
let evaluation_block = api::evaluation_block(client, parent_hash).await;
let max_depth = api::max_referee_distance(client).await;
// member idty -> issued certs
let mut members = FnvHashMap::<IdtyIndex, u32>::default();
......@@ -243,9 +259,75 @@ pub async fn run(
.map(|(idty, _)| distance_rule(&received_certs, &referees, max_depth, *idty))
.collect();
Some((evaluation, current_pool_index, evaluation_result_path))
Some((evaluation, current_period_index, evaluation_result_path))
}
/// Prepares the context for the next evaluation task.
///
/// This function performs the following steps:
/// 1. Fetches the parent hash of the latest block from the API.
/// 2. Determines the current period index.
/// 3. Retrieves the evaluation pool for the current period.
/// - If the pool does not exist or is empty, it returns `None`.
/// 4. Checks if the evaluation result file for the next period already exists.
/// - If it exists, the task has already been completed, so the function returns `None`.
/// 5. Ensures the evaluation result directory is available, creating it if necessary.
/// 6. Retrieves the block number of the evaluation.
///
async fn prepare_evaluation_context(
client: &api::Client,
settings: &Settings,
) -> Option<(H256, u32, EvaluationPool, PathBuf)> {
let parent_hash = api::parent_hash(client).await;
let current_period_index = api::current_period_index(client, parent_hash).await;
// Fetch the pending identities
let Some(evaluation_pool) =
api::current_pool(client, parent_hash, current_period_index % 3).await
else {
info!("Nothing to do: Pool does not exist");
return None;
};
// Stop if nothing to evaluate
if evaluation_pool.evaluations.0.is_empty() {
info!("Nothing to do: Pool is empty");
return None;
}
// The result is saved in a file named `current_period_index + 1`.
// It will be picked up during the next period by the inherent.
let evaluation_result_path = settings
.evaluation_result_dir
.join(VERSION_PREFIX.to_owned() + &(current_period_index + 1).to_string());
// Stop if already evaluated
if evaluation_result_path
.try_exists()
.expect("Result path unavailable")
{
info!("Nothing to do: File already exists");
return None;
}
#[cfg(not(test))]
std::fs::create_dir_all(&settings.evaluation_result_dir).unwrap_or_else(|e| {
panic!(
"Cannot create distance evaluation result directory `{0:?}`: {e:?}",
settings.evaluation_result_dir
);
});
Some((
api::evaluation_block(client, parent_hash).await,
current_period_index,
evaluation_pool,
evaluation_result_path,
))
}
/// Recursively explores the certification graph to identify referees accessible within a given depth.
fn distance_rule_recursive(
received_certs: &FnvHashMap<IdtyIndex, Vec<IdtyIndex>>,
referees: &FnvHashMap<IdtyIndex, u32>,
......@@ -291,7 +373,7 @@ fn distance_rule_recursive(
}
}
/// Returns the fraction `nb_accessible_referees / nb_referees`
/// Calculates the fraction of accessible referees to total referees for a given identity.
fn distance_rule(
received_certs: &FnvHashMap<IdtyIndex, Vec<IdtyIndex>>,
referees: &FnvHashMap<IdtyIndex, u32>,
......
// Copyright 2023 Axiom-Team
// Copyright 2023-2024 Axiom-Team
//
// This file is part of Duniter-v2S.
//
......@@ -20,6 +20,10 @@ use clap::Parser;
struct Cli {
#[clap(short = 'd', long, default_value = "/tmp/duniter/chains/gdev/distance")]
evaluation_result_dir: String,
/// Number of seconds between two evaluations (oneshot if absent)
#[clap(short = 'i', long)]
interval: Option<u64>,
/// Node used for fetching state
#[clap(short = 'u', long, default_value = "ws://127.0.0.1:9944")]
rpc_url: String,
/// Log level (off, error, warn, info, debug, trace)
......@@ -36,12 +40,21 @@ async fn main() {
.init()
.unwrap();
distance_oracle::run_and_save(
&distance_oracle::api::client(cli.rpc_url.clone()).await,
distance_oracle::Settings {
evaluation_result_dir: cli.evaluation_result_dir.into(),
rpc_url: cli.rpc_url,
},
)
.await;
let client = distance_oracle::api::client(&cli.rpc_url).await;
let settings = distance_oracle::Settings {
evaluation_result_dir: cli.evaluation_result_dir.into(),
rpc_url: cli.rpc_url,
};
if let Some(duration) = cli.interval {
let mut interval = tokio::time::interval(std::time::Duration::from_secs(duration));
interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
loop {
distance_oracle::run(&client, &settings).await;
interval.tick().await;
}
} else {
distance_oracle::run(&client, &settings).await;
}
}
......@@ -19,7 +19,6 @@ use crate::runtime::runtime_types::{
};
use dubp_wot::{data::rusty::RustyWebOfTrust, WebOfTrust, WotId};
use sp_core::H256;
use std::collections::BTreeSet;
pub struct Client {
......@@ -28,13 +27,14 @@ pub struct Client {
}
pub type AccountId = subxt::ext::sp_runtime::AccountId32;
pub type IdtyIndex = u32;
pub type H256 = subxt::utils::H256;
pub struct EvaluationPool<AccountId: Ord, IdtyIndex> {
pub struct EvaluationPool {
pub evaluations: (Vec<(IdtyIndex, MedianAcc<Perbill>)>,),
pub evaluators: BTreeSet<AccountId>,
}
pub async fn client(_rpc_url: String) -> Client {
pub async fn client(_rpc_url: impl AsRef<str>) -> Client {
unimplemented!()
}
......@@ -46,7 +46,7 @@ pub async fn parent_hash(_client: &Client) -> H256 {
Default::default()
}
pub async fn current_pool_index(_client: &Client, _parent_hash: H256) -> u32 {
pub async fn current_period_index(_client: &Client, _parent_hash: H256) -> u32 {
0
}
......@@ -54,7 +54,7 @@ pub async fn current_pool(
client: &Client,
_parent_hash: H256,
_current_session: u32,
) -> Option<EvaluationPool<AccountId, IdtyIndex>> {
) -> Option<EvaluationPool> {
Some(EvaluationPool {
evaluations: (client
.wot
......@@ -64,7 +64,10 @@ pub async fn current_pool(
.zip(0..client.pool_len)
.map(|(wot_id, _)| {
(wot_id.0 as IdtyIndex, unsafe {
std::mem::transmute((Vec::<()>::new(), Option::<u32>::None, 0))
std::mem::transmute::<
(std::vec::Vec<()>, std::option::Option<u32>, i32),
MedianAcc<Perbill>,
>((Vec::<()>::new(), Option::<u32>::None, 0))
})
})
.collect(),),
......
......@@ -58,7 +58,7 @@ async fn test_distance_against_v1() {
client.pool_len = n;
let t_a = std::time::Instant::now();
let results = crate::run(&client, &Default::default(), false)
let results = crate::compute_distance_evaluation(&client, &Default::default())
.await
.unwrap();
println!("new time: {}", t_a.elapsed().as_millis());
......
# This is a minimal docker-compose.yml template for running a Duniter instance
# This is a minimal docker-compose.yml template for running a Duniter mirror node
# For more detailed examples, look at docker/compose folder
version: "3.5"
services:
duniter-v2s:
container_name: duniter-v2s
# choose the version of the image here
image: duniter/duniter-v2s:latest
duniter-v2s-mirror:
container_name: duniter-v2s-mirror
# the image tells which network you are connecting to
# here it is gdev network
image: duniter/duniter-v2s-gdev-800:latest
ports:
# telemetry
# prometheus telemetry to monitor resource use
- 9615:9615
# rpc
- 9933:9933
# rpc-ws
# RPC API (ws and http)
- 9944:9944
# p2p
# public p2p endpoint
- 30333:30333
environment:
DUNITER_NODE_NAME: "duniter_local"
DUNITER_CHAIN_NAME: "gdev"
volumes:
- duniter-local-data:/var/lib/duniter
distance-oracle:
container_name: distance-oracle
# choose the version of the image here
image: duniter/duniter-v2s:latest
entrypoint: docker-distance-entrypoint
environment:
ORACLE_RPC_URL: "ws://duniter-v2s:9944"
ORACLE_RESULT_DIR: "/var/lib/duniter/chains/gdev/distance/"
ORACLE_EXECUTION_INTERVAL: "1800"
ORACLE_MAX_DEPTH: "5"
ORACLE_LOG_LEVEL: "info"
# read https://duniter.org/wiki/duniter-v2/configure-docker/
# to configure these
DUNITER_NODE_NAME: duniter_local
DUNITER_CHAIN_NAME: gdev
DUNITER_PUBLIC_ADDR: /dns/your.domain.name/tcp/30333
DUNITER_LISTEN_ADDR: /ip4/0.0.0.0/tcp/30333
volumes:
- duniter-local-data:/var/lib/duniter
......
# Workaround for https://github.com/containers/buildah/issues/4742
FROM debian:bullseye-slim as target
FROM debian:bullseye-slim AS target
# ------------------------------------------------------------------------------
# Build Stage
......@@ -7,14 +7,19 @@ FROM debian:bullseye-slim as target
# When building for a foreign arch, use cross-compilation
# https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/
FROM --platform=$BUILDPLATFORM rust:1-bullseye as build
FROM --platform=$BUILDPLATFORM rust:1-bullseye AS build
ARG BUILDPLATFORM
ARG TARGETPLATFORM
# Debug
RUN echo "BUILDPLATFORM = $BUILDPLATFORM"
RUN echo "TARGETPLATFORM = $TARGETPLATFORM"
# We need the target arch triplet in both Debian and rust flavor
RUN echo "DEBIAN_ARCH_TRIPLET='$(dpkg-architecture -A${TARGETPLATFORM#linux/} -qDEB_TARGET_MULTIARCH)'" >>/root/dynenv
RUN . /root/dynenv && \
echo "RUST_ARCH_TRIPLET='$(echo "$DEBIAN_ARCH_TRIPLET" | sed -E 's/-linux-/-unknown&/')'" >>/root/dynenv
RUN cat /root/dynenv
WORKDIR /root
......@@ -48,8 +53,8 @@ ARG chain="gdev"
RUN set -x && \
cat /root/dynenv && \
. /root/dynenv && \
cargo build --locked $CARGO_OPTIONS --no-default-features $BENCH_OPTIONS --features $chain --target "$RUST_ARCH_TRIPLET" && \
cargo build --locked $CARGO_OPTIONS --target "$RUST_ARCH_TRIPLET" --package distance-oracle && \
cargo build -Zgit=shallow-deps --locked $CARGO_OPTIONS --no-default-features $BENCH_OPTIONS --features $chain --target "$RUST_ARCH_TRIPLET" && \
cargo build -Zgit=shallow-deps --locked $CARGO_OPTIONS --target "$RUST_ARCH_TRIPLET" --package distance-oracle && \
mkdir -p build && \
mv target/$RUST_ARCH_TRIPLET/$TARGET_FOLDER/duniter build/ && \
mv target/$RUST_ARCH_TRIPLET/$TARGET_FOLDER/distance-oracle build/
......@@ -58,7 +63,7 @@ RUN set -x && \
ARG cucumber=0
RUN if [ "$cucumber" != 0 ] && [ "$TARGETPLATFORM" = "$BUILDPLATFORM" ]; then \
cargo ta && \
cargo test --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests --features=runtime-benchmarks,constant-fees \
cargo test -Zgit=shallow-deps --workspace --exclude duniter-end2end-tests --exclude duniter-live-tests --features=runtime-benchmarks,constant-fees \
cd target/debug/deps/ && \
rm cucumber_tests-*.d && \
mv cucumber_tests* ../../../build/duniter-cucumber; \
......@@ -83,13 +88,17 @@ RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN adduser --home /var/lib/duniter duniter
# Configuration
# rpc, rpc-ws, p2p, telemetry
EXPOSE 9933 9944 30333 9615
# rpc, p2p, telemetry
EXPOSE 9944 30333 9615
VOLUME /var/lib/duniter
ENTRYPOINT ["docker-entrypoint"]
USER duniter
# Install
COPY --from=build /root/build /usr/local/bin/
COPY --from=build /root/dynenv /var/lib/duniter
COPY docker/docker-entrypoint /usr/local/bin/
COPY docker/docker-distance-entrypoint /usr/local/bin/
# Debug
RUN cat /var/lib/duniter/dynenv
......@@ -16,9 +16,7 @@ services:
ports:
# Prometheus endpoint
- 9615:9615
# rpc via http
- 9933:9933
# rpc via websocket
# rpc
- 9944:9944
# p2p
- 30333:30333
......@@ -60,20 +58,24 @@ volumes:
## Environment variables
| Name | Description | Default |
|------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|
| ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| `DUNITER_NODE_NAME` | The node name. This name will appear on the Substrate telemetry server when telemetry is enabled. | Random name |
| `DUNITER_CHAIN_NAME` | The currency to process. "gdev" uses the embedded chainspec. A path allows using a local raw JSON chainspec. | `dev` (development mode) |
| `DUNITER_PUBLIC_ADDR` | The libp2p public address base. See [libp2p documentation](https://docs.libp2p.io/concepts/fundamentals/addressing/). This variable is useful when the node is behind a reverse proxy with its ports not directly exposed.<br>Note: the `p2p/<peer_id>` part of the address shouldn't be set in this variable. It is automatically added by Duniter. | duniter-v2s guesses one from the node's IPv4 address. |
| `DUNITER_LISTEN_ADDR` | The libp2p listen address. See [libp2p documentation](https://docs.libp2p.io/concepts/fundamentals/addressing/). This variable is useful when running a validator node behind a reverse proxy, to force the P2P end point in websocket mode with:<br> `DUNITER_LISTEN_ADDR=/ip4/0.0.0.0/tcp/30333/ws` | Non validator node: `/ip4/0.0.0.0/tcp/30333/ws`<br>Validator node: `/ip4/0.0.0.0/tcp/30333` |
| `DUNITER_LISTEN_ADDR` | The libp2p listen address. See [libp2p documentation](https://docs.libp2p.io/concepts/fundamentals/addressing/). This variable is useful when running a validator node behind a reverse proxy, to force the P2P end point in websocket mode with:<br> `DUNITER_LISTEN_ADDR=/ip4/0.0.0.0/tcp/30333/ws` | Non validator node: `/ip4/0.0.0.0/tcp/30333/ws`<br>Validator node: `/ip4/0.0.0.0/tcp/30333` |
| `DUNITER_RPC_CORS` | Value of the polkadot `--rpc-cors` option. | `all` |
| `DUNITER_VALIDATOR` | Boolean (`true` / `false`) to run the node in validator mode. Configure the polkadot options `--validator --rpc-methods Unsafe`. | `false` |
| `DUNITER_DISABLE_PROMETHEUS` | Boolean to disable the Prometheus endpoint on port 9615. | `false` |
| `DUNITER_DISABLE_TELEMETRY` | Boolean to disable connecting to the Substrate telemetry server. | `false` |
| `DUNITER_PRUNING_PROFILE` | * `default`<br> * `archive`: keep all blocks and state blocks<br> * `light`: keep only last 256 state blocks and last 14400 blocks (one day duration) | `default` |
| `DUNITER_PRUNING_PROFILE` | * `default`<br> * `archive`: keep all blocks and state blocks<br> * `light`: keep only last 256 state blocks and last 14400 blocks (one day duration) | `default` |
| `DUNITER_PUBLIC_RPC` | The public RPC endpoint to gossip on the network and make available in the apps. | None |
| `DUNITER_PUBLIC_SQUID` | The public Squid graphql endpoint to gossip on the network and make available in the apps. | None |
| `DUNITER_PUBLIC_ENDPOINTS` | Path to a JSON file containing public endpoints to gossip on the network. The file should use the following format:<br>```{"endpoints": [ { "protocol": "rpc", "address": "wss://gdev.example.com" }, { "protocol": "squid", "address": "gdev.example.com/graphql/v1" }]}``` | None |
## Other Duniter options
You can pass any other option to Duniter using the `command` docker-compose element:
```
command:
# workaround for substrate issue #12073
......@@ -92,6 +94,7 @@ docker compose up -d
## Running duniter subcommands or custom set of options
To run duniter from the command line without the default configuration detailed in the "Environment variables" section, use `--` as the first argument. For example:
```
$ docker run --rm duniter/duniter-v2s-gdev:latest -- key generate
$ docker run --rm duniter/duniter-v2s-gdev:latest -- --chain gdev ...
......
FROM paritytech/ci-linux:production
# Set the working directory
WORKDIR /app/
# Copy the toolchain
COPY rust-toolchain.toml ./
# Install toolchain, substrate and cargo-deb with cargo cache
RUN --mount=type=cache,target=/root/.cargo \
cargo install cargo-deb
# Create a dummy project to cache dependencies
COPY Cargo.toml .
COPY rust-toolchain.toml ./
RUN --mount=type=cache,target=/app/target \
--mount=type=cache,target=/root/.cargo/registry \
mkdir src && \
sed -i '/git = \|version = /!d' Cargo.toml && \
sed -i 's/false/true/' Cargo.toml && \
sed -i '1s/^/\[package\]\nname\=\"Dummy\"\n\[dependencies\]\n/' Cargo.toml && \
echo "fn main() {}" > src/main.rs && \
cargo build -Zgit=shallow-deps --release && \
rm -rf src Cargo.lock Cargo.toml
# Copy the entire project
COPY . .
# Build the project and create Debian packages
RUN --mount=type=cache,target=/app/target \
--mount=type=cache,target=/root/.cargo/registry \
cargo build -Zgit=shallow-deps --release && \
cargo deb --no-build -p duniter && \
cp -r ./target/debian/ ./
# Clean up unnecessary files to reduce image size
RUN rm -rf /app/target/release /root/.cargo/registry