
Idea: combine Buckets of 2^0 (1) - 2^63 raw (antispam)

Open My1 opened this issue 2 years ago • 10 comments

Summary

Basically, all Nano amounts from 2^0 (i.e. 1) raw up to at least 2^63 raw could be combined into one big bucket that gets treated the same as all the other buckets.

The bucket for zero raw is out of scope here, since sending an account's entire balance (leaving zero) is obviously a very normal case.

What problem would be solved by this feature?

Basically, spam. Because the buckets are accessed in a round-robin manner, the lower 64 buckets (excluding zero raw) get the same share of the allocation as the upper 63 buckets plus the zero bucket. A spammer could therefore easily use the low buckets to keep transactions in, especially since the LRU isn't global but per bucket. As there is likely close to no "real" usage in those buckets in the first place, this is a way to get around the LRU restrictions and permanently occupy half the transaction space. Furthermore, if some form of limited mempool is implemented (which I would assume to be per bucket), more space could be allocated to practical amounts of Nano being transferred.

Are there any previous requests for this feature?

Aside from me throwing the idea around on Discord last year and posting it as a comment on Reddit recently, I am not aware of any.

Do you have a suggested solution?

Basically, instead of the current 129 buckets (0 raw plus 2^0 through 2^127 raw), one could have 65 buckets:

  1. 0 raw (emptying your wallet)
  2. a combined bucket for 2^0 (1) to 2^63 raw (covering amounts up to 0.000000000018446744073709551615 XNO)
  3.–65. 63 individual buckets for 2^64 to 2^127 raw each (basically the same buckets we use now)

That way, the round robin only gives 1/65 of the allocation to those small transactions. And for these raw values to have any practical value outside of spam and other unintended usages (encoding messages and whatnot), 1 XNO would have to be worth around 54 million dollars, taking "practical" to mean one tenth of a US cent (see ref 1). (I picked that threshold arbitrarily; I'm not sure it's an accurate representation.)

The bucket allocation could be done with this semi-simple if/else block, which only uses if/else, (not-)equal comparisons, and a bit shift. That should hopefully be fast and not impact the performance of bucket sorting. I am using PHP to represent it because it's the language I am most familiar with, which hopefully helps (note this ignores things like integer size limits and whatnot).

$raw_amount = 42; // example amount in raw; real amounts need 128-bit integers (e.g. via GMP)
if ($raw_amount === 0) { // zero raw
  // move to the zero bucket
}
elseif ($raw_amount >> 64 === 0) { // upper 64 bits are zero, so < 2^64 (0 is already excluded)
  // put in the combined bucket
}
else { // >= 2^64
  // normal bucket sorting
}
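
The same check on real 128-bit amounts could look like this. A minimal sketch, not node code: `classify` and `bucket_kind` are hypothetical names, and the GCC/Clang `unsigned __int128` extension stands in for the node's 128-bit amount type.

```cpp
#include <cassert>

using uint128_t = unsigned __int128; // GCC/Clang extension standing in for nano's uint128_t

// Hypothetical classifier mirroring the PHP sketch above:
// 0 -> zero bucket, (0, 2^64) -> combined bucket, [2^64, 2^128) -> existing per-power buckets.
enum class bucket_kind
{
	zero,
	combined,
	normal
};

bucket_kind classify (uint128_t raw_amount)
{
	if (raw_amount == 0)
	{
		return bucket_kind::zero; // emptying the account
	}
	if (raw_amount >> 64 == 0)
	{
		return bucket_kind::combined; // upper 64 bits clear, so < 2^64
	}
	return bucket_kind::normal; // >= 2^64, sorted into the usual buckets
}
```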

Refs: [1] How many dollars 1 XNO would need to be worth for 2^64-1 raw to equal 0.1 US cents or more: https://www.wolframalpha.com/input?i=0.000000000018446744073709551615%5E%28-1%29%2F1000
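
As a quick sanity check of the ~54 million dollar figure from ref [1] (a sketch; `required_xno_price` is a hypothetical helper):

```cpp
#include <cmath>

// 1 XNO = 10^30 raw, so 2^64 - 1 raw is about 1.8447e-11 XNO.
// Returns the USD price per XNO at which 2^64 - 1 raw is worth $0.001
// (one tenth of a US cent).
double required_xno_price ()
{
	double raw = std::pow (2.0, 64) - 1.0; // 2^64 - 1 raw
	double xno = raw / 1e30; // convert raw to XNO
	return 0.001 / xno;
}
```

This works out to roughly $5.4e7, i.e. about 54 million dollars per XNO, consistent with the estimate above.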

If this feature is approved, would you be willing to submit a pull request with the solution?

I am willing to collaborate

Possible solution

I honestly know way too little about the node code to actually do anything but if there's a way I could help I can try.

Supporting files

No response

My1 avatar Apr 26 '22 11:04 My1

Thanks, this is being considered for future release. There is good discussion in a forum topic from last year which includes some suggestions for better shaping bucket distributions: https://forum.nano.org/t/election-scheduler-and-prioritization-revamp/1837/26

zhyatt avatar Apr 26 '22 16:04 zhyatt

Sure, my idea is maybe not the most sophisticated; I basically aimed for something simple, hopefully quick, easy to understand, and futureproof enough that it wouldn't need a change every time Nano gets worth a bit more or less.

My1 avatar Apr 26 '22 16:04 My1

Wouldn't it make sense to keep all the buckets, but replace the scheduler with one based on Weighted Round Robin? This would achieve similar results if the buckets from 2^0 (1) to 2^63 raw were assigned 1/64 of the weight of the other queues. The number of buckets wouldn't get halved, though, and it would allow for more granular adjustments of the scheduler (parametric weight distribution of the buckets) should the need arise.

zergtoshi avatar Apr 27 '22 07:04 zergtoshi

That sounds possible too. I wasn't aware that was even a thing and just went with the simplest way this could have worked.

My1 avatar Apr 27 '22 14:04 My1

Oh yes, it is a thing and it's used for exactly what it could be used here: dealing with classes differently according to a scheme. It has been used for managing traffic/applying QoS schemes in computer networks for a very long time (switches, routers, load balancers).

zergtoshi avatar Apr 28 '22 09:04 zergtoshi

Sounds cool. Not sure how much performance that takes, especially at the scale of Nano, but if it isn't much, that might be really great too.

My1 avatar Apr 28 '22 12:04 My1

I don't see a difference performance-wise between going through classes/buckets in round-robin fashion and in weighted-round-robin fashion. Buckets get picked one at a time either way; essentially the weighting only affects the order. Then again, I have no clue about programming.

zergtoshi avatar Apr 28 '22 13:04 zergtoshi

Well, with the weighting, the order (in theory at least) might need to be calculated, and while for most normal uses that may not really change anything, Nano is supposed to scale big, and there I really don't know how it goes.

My1 avatar Apr 28 '22 13:04 My1

With a static WRR you can create a static sequence of buckets to go through as well. Examples:

  • Round robin and 5 buckets leads to the sequence 1, 2, 3, 4, 5 that gets repeated over and over
  • Weighted round robin and 5 buckets, buckets 1-4 having a weight of 2 and 5 having a weight of 1 (the not so important bucket) leads to the sequence 1, 2, 3, 4, 1, 2, 3, 4, 5 that gets repeated over and over

As long as the weight stays the same, the sequence stays the same. It's nothing that needs to be computed continuously.

Round robin can be seen as an edge case of WRR with a weight of 1 for all buckets.
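
The static sequence described above can be precomputed once from the weights. A minimal sketch (hypothetical helper, not node code), in which a bucket appears `weight` times per cycle and low-weight buckets are deferred to the later passes, so the 1, 2, 3, 4, 1, 2, 3, 4, 5 example falls out:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Precompute a static weighted-round-robin sequence: bucket i appears
// weights[i] times per cycle. A bucket only participates in its last
// `weight` passes, so weights {2,2,2,2,1} yield 1,2,3,4,1,2,3,4,5
// (shown 1-based; the code uses 0-based indices).
std::vector<std::size_t> build_wrr_schedule (std::vector<unsigned> const & weights)
{
	unsigned max_weight = 0;
	for (auto w : weights)
	{
		max_weight = std::max (max_weight, w);
	}
	std::vector<std::size_t> schedule;
	for (unsigned pass = 0; pass < max_weight; ++pass)
	{
		for (std::size_t i = 0; i < weights.size (); ++i)
		{
			if (pass >= max_weight - weights[i]) // bucket joins its last `weight` passes
			{
				schedule.push_back (i);
			}
		}
	}
	return schedule;
}
```

As long as the weights stay static, this runs once and the resulting sequence is simply cycled, so the per-pick cost is the same as plain round robin.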

zergtoshi avatar Apr 29 '22 07:04 zergtoshi

okay that sounds reasonable.

My1 avatar Apr 29 '22 08:04 My1

I think this issue is effectively addressed by https://github.com/nanocurrency/nano-node/pull/3980 in V24. 62 buckets now:

This change approximates a normal distribution around 2^88 raw (Ӿ0.0003) through 2^120 raw (Ӿ1,300,000) with amounts out of this range getting a single bucket and amounts within this range getting an increased amount of prioritization.

Code snippet:

/**
 * Prioritization constructor, construct a container containing approximately 'maximum' number of blocks.
 * @param maximum number of blocks that this container can hold, this is a soft and approximate limit.
 */
nano::prioritization::prioritization (uint64_t maximum) :
	maximum{ maximum }
{
	auto build_region = [this] (uint128_t const & begin, uint128_t const & end, size_t count) {
		auto width = (end - begin) / count;
		for (auto i = 0; i < count; ++i)
		{
			minimums.push_back (begin + i * width);
		}
	};
	minimums.push_back (uint128_t{ 0 });
	build_region (uint128_t{ 1 } << 88, uint128_t{ 1 } << 92, 2);
	build_region (uint128_t{ 1 } << 92, uint128_t{ 1 } << 96, 4);
	build_region (uint128_t{ 1 } << 96, uint128_t{ 1 } << 100, 8);
	build_region (uint128_t{ 1 } << 100, uint128_t{ 1 } << 104, 16);
	build_region (uint128_t{ 1 } << 104, uint128_t{ 1 } << 108, 16);
	build_region (uint128_t{ 1 } << 108, uint128_t{ 1 } << 112, 8);
	build_region (uint128_t{ 1 } << 112, uint128_t{ 1 } << 116, 4);
	build_region (uint128_t{ 1 } << 116, uint128_t{ 1 } << 120, 2);
	minimums.push_back (uint128_t{ 1 } << 120);
	buckets.resize (minimums.size ());
	populate_schedule ();
	current = schedule.begin ();
}
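
To see how this addresses the dust-bucket concern from the issue, here is a small reconstruction of the minimums table from the constructor above, plus a lookup that maps an amount to the last minimum less than or equal to it. This is a sketch: `build_minimums` and `bucket_index` are hypothetical helpers, and the GCC/Clang `unsigned __int128` extension stands in for nano's `uint128_t`.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

using uint128_t = unsigned __int128; // GCC/Clang extension standing in for nano's uint128_t

// Rebuild the 62 bucket minimums from the V24 constructor above.
std::vector<uint128_t> build_minimums ()
{
	std::vector<uint128_t> minimums;
	auto build_region = [&minimums] (uint128_t begin, uint128_t end, std::size_t count) {
		auto width = (end - begin) / count;
		for (std::size_t i = 0; i < count; ++i)
		{
			minimums.push_back (begin + i * width);
		}
	};
	minimums.push_back (uint128_t{ 0 });
	build_region (uint128_t{ 1 } << 88, uint128_t{ 1 } << 92, 2);
	build_region (uint128_t{ 1 } << 92, uint128_t{ 1 } << 96, 4);
	build_region (uint128_t{ 1 } << 96, uint128_t{ 1 } << 100, 8);
	build_region (uint128_t{ 1 } << 100, uint128_t{ 1 } << 104, 16);
	build_region (uint128_t{ 1 } << 104, uint128_t{ 1 } << 108, 16);
	build_region (uint128_t{ 1 } << 108, uint128_t{ 1 } << 112, 8);
	build_region (uint128_t{ 1 } << 112, uint128_t{ 1 } << 116, 4);
	build_region (uint128_t{ 1 } << 116, uint128_t{ 1 } << 120, 2);
	minimums.push_back (uint128_t{ 1 } << 120);
	return minimums;
}

// A block's bucket is the last minimum <= its amount
// (minimums is sorted ascending, so binary search works).
std::size_t bucket_index (std::vector<uint128_t> const & minimums, uint128_t amount)
{
	auto it = std::upper_bound (minimums.begin (), minimums.end (), amount);
	return static_cast<std::size_t> (it - minimums.begin ()) - 1;
}
```

With this layout, every amount below 2^88 raw lands in the single bottom bucket and everything at or above 2^120 raw lands in the single top bucket, so dust amounts can no longer claim half of the round-robin allocation.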

qwahzi avatar Dec 29 '22 14:12 qwahzi

Moving to V24 milestone and closing per discussion with @My1 on Discord.

qwahzi avatar Jan 06 '23 19:01 qwahzi