# Weights benchmarking
## What is the reference machine?

For now (09/2022), it's a Raspberry Pi 4 Model B - 4GB with an SSD connected via USB3.
To cross-compile the benchmarks binary for armv7:

```
./scripts/cross-build-arm.sh --features runtime-benchmarks
```
The cross-compiled binary is generated here: `target/armv7-unknown-linux-gnueabihf/release/duniter`.
## How to benchmark the weights of a Call/Hook/Pallet
1. Create the benchmarking tests. See commit f5f2ae96 for a complete real example. A minimal sketch of such a test is given after this list.
2. Run the benchmark test on your local machine:
   `cargo test -p <pallet> --features runtime-benchmarks`
3. If the benchmark tests compile and pass, compile the binary with benchmarks on your local machine:
   `cargo build --release --features runtime-benchmarks`
4. Run the benchmarks on your local machine (to check that they work with a real runtime). See 0d1232cd for a complete real example. The command is:
   ```
   duniter benchmark pallet --chain=CHAINSPEC --steps=50 --repeat=20 --pallet=<pallet> --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --header=./file_header.txt --output=./runtime/common/src/weights/
   ```
5. Use the generated file content to create the `WeightInfo` trait and the `()` dummy implementation in `pallets/<pallet>/src/weights.rs`. Then use the `WeightInfo` trait in the real code of the pallet. See 62dcc17f for a complete real example. A sketch of such a trait is given after the notes below.
6. Redo steps 3. and 4. on the reference machine.
7. Use the `runtime/common/src/weights/pallet_<pallet>.rs` file generated on the reference machine in the runtimes configuration. See af62a3b9 for a complete real example.
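To illustrate step 1, here is a minimal, hedged sketch of a benchmarking module using the `frame_benchmarking::benchmarks!` macro. The `do_something` call, the `Something` storage item, and the `mock` module are hypothetical placeholders, not part of Duniter; a real benchmark must target the pallet's actual calls and set up their worst-case state.

```rust
// pallets/<pallet>/src/benchmarking.rs (hypothetical sketch)
use super::*;
use frame_benchmarking::{benchmarks, impl_benchmark_test_suite, whitelisted_caller};
use frame_system::RawOrigin;

benchmarks! {
    // The benchmark is named after the call it measures; `_` below refers to that call.
    do_something {
        // Setup phase: build the worst-case state before the measured call.
        let caller: T::AccountId = whitelisted_caller();
    }: _(RawOrigin::Signed(caller), 42u32)
    verify {
        // Post-condition checked after the measured execution.
        assert!(Something::<T>::get().is_some());
    }
}

// Lets `cargo test -p <pallet> --features runtime-benchmarks` run the
// benchmarks as tests against the pallet's mock runtime.
impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
```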
Note 1: Use the relevant chainspec for the benchmarks in place of `CHAINSPEC`, for example `--chain=dev`.

Note 2: If the reference machine does not support wasmtime, you should replace `--wasm-execution=compiled` with `--wasm-execution=interpreted-i-know-what-i-do`.
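As a rough illustration of step 5, the `WeightInfo` trait and its `()` dummy implementation typically have the following shape. The `do_something` function and the weight value are placeholders; the real content comes from the file generated by the benchmark command, and the exact `Weight` constructor depends on the Substrate version in use.

```rust
// pallets/<pallet>/src/weights.rs (hypothetical sketch; real values are generated)
use frame_support::weights::Weight;

pub trait WeightInfo {
    fn do_something() -> Weight;
}

// Dummy implementation, useful for tests and as a default.
impl WeightInfo for () {
    fn do_something() -> Weight {
        // Placeholder; the generated implementation contains the measured weights.
        Weight::from_ref_time(10_000_000)
    }
}
```

The pallet's `Config` trait then exposes an associated `type WeightInfo: WeightInfo;`, and each call is annotated with `#[pallet::weight(T::WeightInfo::do_something())]` so the runtime can plug in the benchmarked implementation.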
## Generate base block benchmarking
1. Build the binary for the reference machine and copy it to the reference machine.
2. Run the base block benchmarks command:
   ```
   duniter benchmark overhead --chain=dev --execution=wasm --wasm-execution=compiled --weight-path=./runtime/common/src/weights/ --warmup=10 --repeat=100
   ```
3. Commit the changes and open an MR.
## Generate storage benchmarking
1. Build the binary for the reference machine and copy it to the reference machine.
2. Copy a DB to the reference machine (on the SSD), for example:
   `scp -r -P 37015 tmp/t1 pi@192.168.1.188:/mnt/ssd1/duniter-v2s/`
3. Run the storage benchmarks command, for example:
   ```
   duniter benchmark storage -d=/mnt/ssd1/duniter-v2s/t1 --chain=gdev --mul=2 --weight-path=. --state-version=1
   ```
4. Copy the generated `paritydb_weights.rs` file into the codebase, in the folder `runtime/common/src/weights/`.
5. Commit the changes and open an MR.
## How to Write Benchmarks
### Calls
Ensure that any extrinsic call is benchmarked using the most computationally intensive path, i.e., the worst-case scenario.
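For example, a call whose cost grows with the number of stored items should be benchmarked with storage filled to the configured maximum. In this hedged sketch, the `remove_all_certs` call, the `MaxCertifications` bound, and the `force_add_cert` setup helper are all hypothetical:

```rust
use super::*;
use frame_benchmarking::{benchmarks, whitelisted_caller};
use frame_support::traits::Get;
use frame_system::RawOrigin;

benchmarks! {
    remove_all_certs {
        let caller: T::AccountId = whitelisted_caller();
        // Worst case: the caller already holds the maximum number of certifications,
        // so the call has to remove as many items as the runtime allows.
        for i in 0..T::MaxCertifications::get() {
            Pallet::<T>::force_add_cert(&caller, i).expect("benchmark setup should not fail");
        }
    }: _(RawOrigin::Signed(caller))
}
```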
### Hooks
Benchmark each hook to determine the weight consumed by it; hence, it is essential to benchmark all possible paths.
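For example, if `on_initialize` does extra work when memberships expire at the current block, that path should get its own benchmark, parameterized over the number of expiring items. In this hedged sketch, the `expire_membership_at` setup helper and the upper bound of 100 are hypothetical:

```rust
use super::*;
use frame_benchmarking::benchmarks;
use frame_support::traits::Hooks;
use sp_runtime::traits::Zero;

benchmarks! {
    on_initialize_expirations {
        // `i` memberships are scheduled to expire, exercising the hook's heaviest path.
        let i in 0 .. 100;
        for j in 0..i {
            Pallet::<T>::expire_membership_at(j, T::BlockNumber::zero());
        }
    }: { Pallet::<T>::on_initialize(T::BlockNumber::zero()); }
}
```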
### Handlers and Internal Functions
When designing handlers and internal functions, it is advisable to avoid having them return a weight, for the following reasons:
- Simplified Benchmarking: Writing benchmarks for hooks or calls where handlers and internal functions are utilized becomes more straightforward.
- Reduced Benchmarking Complexity: By directly measuring execution and overhead in a single pass, the number of benchmarks is minimized.
- Enhanced Readability: Understanding that weight accounting occurs at the outermost level improves the overall readability of the code.
One notable exception is internal functions called in hooks like `on_idle` or `on_initialize`, which can be easier to benchmark separately when the hook contains many branches.
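As a hedged sketch of the recommended shape (a fragment of a hypothetical pallet; the `MembershipsExpiringNow` storage item, the handler, and the `WeightInfo` entry are placeholders): the handler does the work without returning a weight, and the hook that calls it is the single place where the benchmarked weight is accounted.

```rust
// Fragment of a hypothetical pallet module (inside `#[frame_support::pallet]`).

impl<T: Config> Pallet<T> {
    /// Internal handler: performs the state changes, returns no weight.
    fn apply_membership_expiry(who: &T::AccountId) {
        // ... remove the membership, notify other pallets, etc.
    }
}

#[pallet::hooks]
impl<T: Config> Hooks<T::BlockNumber> for Pallet<T> {
    fn on_initialize(_n: T::BlockNumber) -> Weight {
        // The hook is the outermost level: it is benchmarked as a whole and
        // accounts for everything done by the handlers it calls.
        for who in MembershipsExpiringNow::<T>::take() {
            Self::apply_membership_expiry(&who);
        }
        // In practice the returned weight may also depend on the number of items processed.
        T::WeightInfo::on_initialize_expirations()
    }
}
```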