Proof-of-work difficulty change is too big
Currently, PoW is computed by requiring the proof to start with a given number of zeros, for example `0000F73E2C2D639B4A90CA183A38581EBBE40F12`. But this technique implies that each time the difficulty level needs to evolve, it is either:
- multiplied by 16 (because of the hexadecimal base)
- or divided by 16
Which means, given an average computing time target of 10 min/block: once the difficulty is too easy (the block is computed in under 10 / 4 = 2 min 30 s), the difficulty levels up by a factor of 16 (due to hexadecimal notation), which gives 2.5 * 16 = 40 min: the expected computing time has become 4 times the average target.

That is a big validation time change.
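To make the factor-16 jump concrete, here is a minimal Python sketch of the current leading-zeros check. This is an illustration, not the project's actual code: the function name `pow_ok_hex_zeros` is hypothetical, and SHA-1 is assumed only because the 40-hex-digit example above matches a 160-bit digest.

```python
import hashlib

def pow_ok_hex_zeros(block_data: bytes, nonce: int, nb_zeros: int) -> bool:
    """Current scheme (sketch): the hash must start with nb_zeros hex zeros.

    Hash choice (SHA-1) is an assumption for illustration only.
    """
    digest = hashlib.sha1(block_data + str(nonce).encode()).hexdigest().upper()
    return digest.startswith("0" * nb_zeros)

# Each leading hex zero divides the probability of a valid hash by 16:
# P(valid) = 16 ** -nb_zeros, so going from N to N+1 zeros makes mining
# 16 times harder on average -- hence the coarse difficulty steps.
```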
Instead, proof-of-work could use another definition: the proof-of-work must start with `NB_ZEROS` zeros, followed by:
- a character in `[0,F]` if the difficulty depends only on leading zeros
- a character in `[0,7]` if the difficulty has to be 2 times harder
- a character in `[0,3]` if the difficulty has to be 4 times harder
- a character in `[0,1]` if the difficulty has to be 8 times harder
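The rule above can be sketched in Python as follows. Again, this is only an illustration under assumptions: `pow_ok_fine` is a hypothetical name, SHA-1 is assumed from the example digest, and the `MAX_DIGIT` table simply encodes the four ranges listed above.

```python
import hashlib

# Proposed rule (sketch): NB_ZEROS leading zeros, then a hex digit that
# must fall below a threshold. extra_factor 1, 2, 4, 8 maps to the
# allowed ranges [0,F], [0,7], [0,3], [0,1] from the definition above.
MAX_DIGIT = {1: 0xF, 2: 0x7, 4: 0x3, 8: 0x1}

def pow_ok_fine(block_data: bytes, nonce: int,
                nb_zeros: int, extra_factor: int) -> bool:
    digest = hashlib.sha1(block_data + str(nonce).encode()).hexdigest().upper()
    if not digest.startswith("0" * nb_zeros):
        return False
    # The digit right after the zeros decides the sub-step of difficulty.
    next_digit = int(digest[nb_zeros], 16)
    return next_digit <= MAX_DIGIT[extra_factor]
```

Halving `MAX_DIGIT` halves the probability of a valid hash, so each sub-step doubles the work instead of multiplying it by 16.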
This definition of PoW reduces the difficulty step from the factor of 16 imposed by the hexadecimal system (computing time bounds are SQRT(16) = 4 times less or SQRT(16) = 4 times more than the average target) to a factor of 2 (bounds are SQRT(2) times less or SQRT(2) times more than the average target).
So, back to our example with an average computing time target of 10 min/block: once the difficulty is too easy (the block is computed in under 10 / sqrt(2) ≈ 7 min), the difficulty levels up by a factor of 2, which gives 7 min * 2 = 14 min: the expected computing time is now only sqrt(2) times the average target.

That would be more acceptable.
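One convenient way to implement this scheme is to fold zeros and sub-steps into a single integer difficulty level, where each +1 doubles the work. This encoding is my own suggestion, not part of the proposal above; `decode_level` is a hypothetical helper.

```python
def decode_level(level: int) -> tuple:
    """Hypothetical encoding: each +1 in 'level' doubles the difficulty.

    level // 4 = number of leading hex zeros required;
    level % 4  = extra zero bits required in the next digit,
    so the max allowed next digit is F, 7, 3 or 1.
    """
    nb_zeros, extra_bits = divmod(level, 4)
    max_digit = 0xF >> extra_bits  # F, 7, 3, 1 for 0..3 extra bits
    return nb_zeros, max_digit

# With this encoding, retargeting moves the level by +/-1 at a time,
# so the expected block time never drifts by more than a factor of 2.
```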