
Reversion to original fidelity bond settings due to market adoption of bonds

Open jmmadn opened this issue 1 year ago • 16 comments

When #1247 was proposed, the number of fidelity bonds in the market was not very large and most of the makers didn't have fidelity bonds.

A recent check of this with ob-watcher shows 82 offers with fidelity bonds and 85 offers without - almost a 50/50 split. This demonstrates that the market is adopting fidelity bonds quickly.

Now that we will soon have more offers with bonds than without, should we consider switching back to the original exponent of 2 to reduce the likelihood of Sybil counterparties?

jmmadn avatar Aug 01 '22 06:08 jmmadn

In #1247 two changes were proposed: increasing the default bondless maker allowance and lowering the exponent. The first was proposed because there were not enough makers with fidelity bonds, but it wasn't merged in the end, so there is nothing to change back now. Lowering the exponent is different: it was to give higher weight to lower-value bonds, not because of low fidelity bond adoption. Also remember that these are only defaults; in JM the taker always decides which makers to do a coinjoin with, and anyone is free to change these settings (or even manually pick makers, see the -P option of sendpayment.py).
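
To make the weighting concrete, here is a toy sketch of the idea (this is not JoinMarket's actual order-choosing code; the function name and bond values are invented for illustration, assuming selection weight proportional to bond value raised to the exponent):

```python
import random

def pick_bonded_makers(bond_values, exponent, k):
    """Toy illustration: draw k makers at random, weighted by
    bond_value ** exponent (not the real JoinMarket selection code)."""
    weights = [v ** exponent for v in bond_values]
    return random.choices(range(len(bond_values)), weights=weights, k=k)

# Three equal small bonds and one bond worth 10x as much.
bonds = [1.0, 1.0, 1.0, 10.0]
for exp in (1.3, 2.0):
    picks = pick_bonded_makers(bonds, exp, 100_000)
    share = picks.count(3) / len(picks)
    print(f"exponent {exp}: the 10x bond wins ~{share:.0%} of the picks")
```

With these made-up numbers the 10x bond wins roughly 87% of the picks at exponent 1.3 and roughly 97% at 2.0; that difference is the knob being debated here.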

kristapsk avatar Aug 01 '22 17:08 kristapsk

@kristapsk The lowering of the default exponent, as widely discussed in #1247, also increased the likelihood of Sybil counterparties. This tradeoff was accepted because the number of fidelity bonds in the market was not very large and most of the makers didn't have fidelity bonds.

However, the market conditions have now changed and an increasing percentage of makers have bonds. I think that justifies reverting the defaults, as the original defaults were designed to reduce the likelihood of Sybil attacks. I agree that individual takers can modify the defaults themselves, but that does not provide protection unless everyone adjusts the defaults, or, more simply, the default value is reverted so that a freshly generated joinmarket.cfg picks it up.

jmmadn avatar Aug 02 '22 03:08 jmmadn

I agree with @kristapsk 's summary.

The lowering of the default exponent, as widely discussed in #1247, also increased the likelihood of Sybil counterparties.

Well it doesn't increase a Sybil risk for anyone who chooses to change it to whatever they think is best.

It's very hard to do an objective analysis of this, but I think there might be a way to at least use some 'theory' to analyze it. Consider modelling a wealth distribution (Pareto or whatever); you might be able to build a mathematical model to try to establish a balance between two conflicting dynamics (a toy version of such a model is sketched after the list below):

  • High exponent -> higher probabilities at top of distribution -> smaller number ever get chosen, meaning always choosing the same top N parties becomes very common.
  • Low exponent (close to 1) -> easier to split funds to Sybil attack
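
A toy version of that kind of model, with every assumption invented here (Pareto-distributed bond values with an arbitrary shape parameter, selection weight proportional to value ** exponent):

```python
import numpy as np

rng = np.random.default_rng(42)
bonds = rng.pareto(1.5, size=100) + 1.0  # 100 makers, Pareto-ish bond values

def top_n_share(values, exponent, n=10):
    """Fraction of total selection weight captured by the n largest bonds."""
    w = np.sort(values ** exponent)[::-1]
    return w[:n].sum() / w.sum()

def split_penalty(exponent, pieces=10):
    """Weight kept when a bond of value 1.0 is split into `pieces` equal bonds."""
    return pieces * (1.0 / pieces) ** exponent

for exp in (1.3, 2.0):
    print(f"exponent {exp}: top 10 of 100 makers hold "
          f"{top_n_share(bonds, exp):.0%} of the weight; "
          f"splitting one bond into 10 keeps {split_penalty(exp):.0%} of its weight")
```

Under these toy assumptions the higher exponent concentrates weight in the top bonds but makes splitting a bond much more expensive, which is exactly the tension between the two bullets above.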

I don't think this is a simple matter, at all. I think the original analysis that @chris-belcher did, ~~didn't consider the first of these two (correct me if I'm wrong)~~ edit: doh, of course it did, that's exactly the main thing his 'financial mathematics' gist actually did :).

Even in the absence of any attack (as alluded to by the first bullet point), it's an interesting discussion as to whether it's OK to keep joining with the same counterparties (genuine question; it can even be better to keep joining with the same counterparties, it's, again, not simple). That's what is different here compared to just asking the question "how well defended are we against an attacker taking the top positions"? It's also "are we OK with always doing joins with the bots at the top and only them?"

(The fact that we have a bondless makers allowance helps, probably, by mixing it up a bit. But a minor thing really.)

(sorry I edited this quite a bit from the first version, as I got something quite confused originally)

AdamISZ avatar Aug 02 '22 10:08 AdamISZ

@jmmadn do you have any further thoughts on this?

Anyone else?

AdamISZ avatar Sep 11 '22 10:09 AdamISZ

Well it doesn't increase a Sybil risk for anyone who chooses to change it to whatever they think is best.

I think we've talked in the past about defaults that protect everyone versus letting people choose for themselves when many don't know the risk. I think the former is better in this case. Reverting would lower Sybil risk, and anyone who wants to relax the standard can still individually set the exponent to a lower value.

Even in the absence of any attack (as alluded to by the first bullet point), it's an interesting discussion as to whether it's OK to keep joining with the same counterparties (genuine question; it can even be better to keep joining with the same counterparties, it's, again, not simple).

I don't think it would be the same counterparties now. Perhaps months ago, when fidelity bonds were initially implemented (there was very little selection). Now there is a rich ecosystem of them, with lots of diversity.

As the change in the exponent was seemingly made for market conditions and to address fidelity bonds not fully 'arriving' yet, my recommendation would be to revert now that the market has 'caught up'.

After the exponent change to 1.3, I have noticed multiple makers with small bonds flooding the marketplace, when there were not many of them at exponent 2. It's hard to say that these makers are controlled by the same entity, but there are minor characteristics that seem to suggest they are. This was one of the issues we saw pre-fidelity bonds, and with a lower exponent there is still an incentive for such an entity to add a small bond to each maker it controls and carry on.

jmmadn avatar Sep 11 '22 16:09 jmmadn

IMHO we shouldn't keep meddling with this value.

I think the former is better in this case

Since clearly there's no provable best, the same argument can be made either way. Given we just changed it, I'd much prefer keeping the current status quo unless some strong case can be made for one solution over the other.

As the change in the exponent was seemingly made for market conditions and to address fidelity bonds not fully 'arriving' yet.

The exponent change seems to be about more than just that, as mentioned by Belcher, Adam, and kristapsk.

After the exponent change to 1.3, I have noticed multiple makers with small bonds flooding the marketplace, when there were not many of them at exponent 2

This is entirely circumstantial tho, there are plenty of other possible explanations. This claim in particular should be verifiable, right? A simulation could be made to calculate what chance those small bonds (of a few million sats, or whatever they are) actually have of being selected.
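
For instance, something along these lines (the bond sizes below are entirely made up; the point is only that the claim can be checked against a real order-book snapshot, assuming selection weight proportional to bond_value ** exponent):

```python
# Hypothetical order book: a few large bonds plus many small ones
# (values are invented, not taken from the real order book).
large_bonds = [1000.0] * 10
small_bonds = [1.0] * 70

def small_bond_weight_share(exponent):
    """Share of total selection weight held by the small-bond makers."""
    weights = [b ** exponent for b in large_bonds + small_bonds]
    return sum(weights[len(large_bonds):]) / sum(weights)

for exp in (1.3, 2.0):
    print(f"exponent {exp}: small-bond makers get "
          f"{small_bond_weight_share(exp):.3%} of the selection weight")
```

Plugging in real bond values from ob-watcher would turn "is it worth Sybiling with lots of small bonds?" into a concrete number rather than a hunch.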

PulpCattel avatar Sep 11 '22 17:09 PulpCattel

IMHO we shouldn't keep meddling with this value.

Agreed, it shouldn't have been changed in the first place, so let's revert it.

Since clearly there's no provable best, the same argument can be made either way. Given we just changed it, I'd much prefer keeping the current status quo unless some strong case can be made for one solution over the other.

I find 'since clearly' statements assumptive.

The 'let's not change it because we recently changed it' argument shouldn't be a barrier to course corrections or adjustments. And in my opinion there is already a strong case made to revert.

This is entirely circumstantial tho, there are plenty of other possible explanations. This claim in particular should be verifiable, right? A simulation can be made to calculate what's the chance of those millions (or whatever) of small bonds.

A higher default exponent reduces the incentive of multiple small makers being controlled by the same entity. There have already been simulations, studies and calculations done as part of the fidelity bond implementation. The values originally set were to reduce the likelihood of this scenario. The value was adjusted to account for the immaturity of the fidelity bond market. Now that it has matured, I don't see any reason not to revert, as it has additional benefits in reducing Sybil risk.

I think we had it right the first time - the market just wasn't established at that time. If we had adjusted the bondless makers allowance similarly for the same reasons, I would have recommended reverting that value as well. Reverting the defaults protects new makers who are not aware of the Sybil risks, and knowledgeable makers who understand the risks and want to adjust the values still have the option of doing so.

jmmadn avatar Sep 11 '22 18:09 jmmadn

I find 'since clearly' statements assumptive.

It is; it could be clarified as "since clearly there is no provable best yet". You haven't brought anything new in favor of it; the same arguments made in #1247 about the centralization risk are still valid today (and they'll probably always be?).

And in my opinion there is already a strong case made to revert.

Yeah but other people clearly had the opposite opinion. How can we prove one of the parties wrong?

There have already been simulations, studies and calculations done as part of the fidelity bond implementation. The values originally set were to reduce the likelihood of this scenario.

@chris-belcher, who made most of the simulations, made clear that the value 2 is not special. And he said 1.2 or 1.3 still offers good enough protection (which makes intuitive sense). Can you disprove this claim in a verifiable way? A counter-simulation would be nice.

The value was adjusted to account for the immaturity of the fidelity bond market.

You keep saying that but I don't see anybody else agreeing with that.

PulpCattel avatar Sep 11 '22 18:09 PulpCattel

Yeah but other people clearly had the opposite opinion. How can we prove one of the party wrong?

Because it's not about proving anyone wrong. One approach protects new makers who are not knowledgeable about Sybil risk in a better way. The centralization risk has been addressed naturally with the market maturing.

You keep saying that but I don't see anybody else agreeing with that.

Excerpts from the original post in #1247:

But the number of fidelity bonds in the market thus far has not been very large. Combined with the below (the exponent) this has meant too much concentration of joins amongst a small group.

This direction will of course reduce the quantitatively assessed 'defence against Sybiling' for now. Based on a lot of conversation and looking at the market, I think that it makes sense.

The original reasons for adjusting the values were the state of the market (then), the lack of fidelity bond adoption, and preventing concentration of joins. Now that the market has matured and there is widespread use of fidelity bonds, we should revert, in line with the original purpose of fidelity bonds - to reduce Sybil risk.

As for Chris' statement that he considers 1.2 or 1.3 'adequate', that was made months ago. Now that the market has matured, it makes sense to revert the value.

I don't see any reasons to keep this value as-is. The arguments of not changing it back because it was recently changed or because it was considered adequate months ago don't really hold water for me.

jmmadn avatar Sep 11 '22 19:09 jmmadn

Because it's not about proving anyone wrong.

How so? You are making a claim that 2.0 is better than 1.3 overall as default (I could also ask why not 1.9? Or 1.8? or 2.5? or 1.5?), someone is making the opposite claim (the sybil resistance loss is not much compared to the decentralization win). Clearly we can't all be right at the same time.

The centralization risk has been addressed naturally with the market maturing.

This can be proved either right or wrong. From @AdamISZ's comment above:

High exponent -> higher probabilities at top of distribution -> smaller number ever get chosen, meaning always choosing the same top N parties becomes very common.

This seems quite independent of market maturity or the number of bonds; we could have 1 million bonds but the takers will consistently pick the top 20. Giving some more breathing room to the other bonds in the spectrum seems reasonable.

Excerpts from the original post in https://github.com/JoinMarket-Org/joinmarket-clientserver/issues/1247:

That PR included the _DEFAULT_BONDLESS_MAKERS_ALLOWANCE change, so his initial argument (before all the conversation) was made with that in mind. @AdamISZ can speak for himself, but I read it as the two being separate but even worse in combination. If we now have enough bonds to make point 1 of that issue irrelevant, point 2 still remains.

As for Chris' statement that he considers 1.2 or 1.3 'adequate', that was made months ago

We can't really speak for him, but I don't see his argument being at all related to market maturity. I don't have anything against 2.0, or 1.5 or any other non-extreme number, but it seems reasonable to require more evidence before changing that value again. I mean, Belcher literally said:

FWIW, speaking historically the reason I chose an exponent of 2 was because I forgot the possibility of fractional exponents. So in my head the choice was only 1 or 2 (or natural numbers higher than two which are even worse)

I don't see why 2.0 should be the gold standard.

The arguments of not changing it back because it was recently changed or because it was considered adequate months ago don't really hold water for me.

This discussion has made clear this is the main point of the debate. I don't think this is the argument; rather, as expressed above, this is a more general and complicated issue about the trade-off between sybil resistance and (de)centralization.

PulpCattel avatar Sep 11 '22 19:09 PulpCattel

Clearly we can't all be right at the same time.

This can be proved either right or wrong.

I don't subscribe to this - it seems very binary - 'for me to be right then you have to be wrong'. I think you make good points and I see your point of view. It's not all black and white.

As to the whole 'what is the best value' discussion, that can be up to the individual maker/taker to set. If you want to keep it at 1.3 or whatever, great, you can do that. Just for the newer takers/makers that are not knowledgeable about the Sybil risk, it makes sense to revert back to the original values intended to better protect them against that risk.

I'm not particularly tied to 2 or 0.125, but these were the original values and I'm fairly certain the exponent adjustment to 1.3 was a transitory/temporary change until the market had caught up.

This seems quite independent of market maturity or the number of bonds; we could have 1 million bonds but the takers will consistently pick the top 20. Giving some more breathing room to the other bonds in the spectrum seems reasonable.

It made sense to give that 'breathing room' when we had a low number of fidelity bonds, but I don't see the need for that now. With a million bonds (and even with a mature market), there is no 'top 20' that would consistently get joins over others. It's not a have/have-not situation - it would be more of an equitable distribution of joins because of the large selection the system could choose from. There should always be a continuing incentive for makers controlled by the same entity to pool into one maker. Yes, we can't tell for sure that multiple makers with these small bonds are controlled by the same entity, but it sure seems like it to me, and even if they weren't, reverting to a stronger exponent would dissuade this.

This discussion has made clear this is the main point of the debate. I don't think this is the argument, rather as expressed above this is a more general and complicate issue about the trade-off between sybil resistance and (de)centralization.

These statements were made earlier as reasons I didn't think were relevant so I wanted to call them out moving forward so we didn't muddy the waters. Happy to focus on just Sybil risk versus centralization. In my opinion centralization is no longer a concern with the familiarity and adoption of fidelity bonds.

jmmadn avatar Sep 11 '22 20:09 jmmadn

I don't subscribe to this - it seems very binary - 'for me to be right then you have to be wrong'.

If it's not black and white, as in one solution is not clearly better than the other, why change it? If you are right that the sybil resistance gain warrants this change, and that the centralization risk is naturally solved (or not a problem to begin with) even with such an exponent, then it is black and white, no? 2.0 (or an equivalent value) would be clearly better and we should change to it. If we can't find a satisfactory, clearly better default value then I don't like changing it again.

I'm fairly certain the exponent adjustment to 1.3 was a transitory/temporary change until the market had caught up.

To be clear, if this turns out not to be the case, would you still go with 2.0, or maybe a different number? Or to ask differently, if 1.3 was the default from the start, would you still want to change it?

it would be more of an equitable distribution of joins because of the large selection the system could choose from.

If the top ones are sufficiently bigger than the others, then potentially the large selection would be ignored and they would get an arguably disproportionate advantage. This might even be a kinda-good thing sometimes, as Adam mentioned, hard to say really. I'd love to see this simulated under different settings and market conditions. That would allow a much more informed decision, if one is at all possible.

PulpCattel avatar Sep 11 '22 22:09 PulpCattel

If it's not black and white, as in one solution is not clearly better than the other, why change it?

Already mentioned - the change was made because of the lack of fidelity bonds. I understood why the change was made then, and it made sense as a temporary change that would be reverted once the market had caught up. The original analysis that went into the fidelity bonds had calculated these initial values to prevent Sybil risk.

If you are right that the sybil resistance gain warrants this change, and that the centralization risk is naturally solved (or not a problem to begin with) even with such an exponent, then it is black and white, no? 2.0 (or an equivalent value) would be clearly better and we should change to it.

It seems pretty clear to me - but that's why we put this up for discussion. There may be viewpoints that others can chime in with that neither you nor I can see.

If we can't find a satisfactory, clearly better default value then I don't like changing it again.

Reverting the default values is for the consideration of new and existing users of Joinmarket who are not aware of the Sybil risk. Users who are aware of the risk can choose to set the value to whatever they find acceptable. Again, not changing something because it had been changed before doesn't hold water for me, especially if it was changed to accommodate a temporary market condition (the lack of fidelity bond adoption).

To be clear, if this turns out not to be the case, would you still go with 2.0, or maybe a different number? Or to ask differently, if 1.3 was the default from the start, would you still want to change it?

Why wouldn't it be the case? A larger number of bonds means more makers to randomly choose from and less likelihood of clustering around top makers. But there should always be a strong incentive to consolidate multiple makers into one and reduce Sybil risk. I'm not tied to any number, but favouring 1.3 rather than 2.0 would seem to favour protection against centralization risk over protection against Sybil risk. In my opinion the centralization risk is no longer as prevalent as the Sybil risk.

As I said earlier, I could understand the change then as it made sense with the market conditions. I also thought changing the bonded makers value made sense as well. In the current market conditions, I would recommend reverting both values.

If the top ones are bigger enough than the others, then potentially the large selection would be ignored and they would get an arguably disproportioned advantage. This might even be a kinda-good thing sometimes, as Adam mentioned, hard to say really. I'd love to see this simulated under different settings and market conditions. This would allow a much more informed decision, if one is at all possible.

If you have a larger number of makers, the 'top ones' would not be that much bigger than the others. It would be more of a smooth distribution, instead of the disproportionate one you get with a lower number of makers. We can agree on wanting to see this simulated. However, in the interest of protecting against Sybil risk, which is the intention of fidelity bonds, I recommend reverting the default values.

jmmadn avatar Sep 13 '22 08:09 jmmadn

To be clear, if this turns out not to be the case, would you still go with 2.0, or maybe a different number? Or to ask differently, if 1.3 was the default from the start, would you still want to change it?

Why wouldn't it be the case?

EDIT: It wouldn't be the case because, as said in this discussion multiple times, other people have different ideas about how bad the sybil gets (i.e., how cheap it gets) with a value like 1.3, and they think it's still good enough. Are you considering this scenario at all?

Because your strongest argument, repeated all over this issue (multiple times just in your last comment), is based on the assumption that it was a temporary measure and that 2.0 has always been considered the preferred long-term value (this last point at least has already been proved wrong by Belcher's comment). If this is not the case tho, then we go back to "someone has to prove someone else wrong" to justify this change or any other change to this value, and this seems to be extremely tricky to do. Consider that market makers take economic decisions based on the default JM picks. Constantly changing it is horrible. It was changed the first time because it seemed a clear improvement; it has to be an even greater improvement now to change it again.

I'm also curious what would happen if we revert and two months from now someone else comes and says the opposite of what you are saying. I.e., "Too much centralization, it's not fair, the market has changed again, not enough bonds anymore, etc." If he's as strongly convinced as you, should we revert once more? How can we decide without more evidence to make the decision as clear-cut as possible (and what if such evidence won't ever present itself to us)? Should we change it regularly based on up-to-date market conditions? On the other hand, if we wait instead and say something like "let's give more time to the market to mature before reverting" then it feels like even more of a rug pull to change it after so many makers are probably considering it set in stone. To me this is a case where any non-extreme value is fine (this is also the argument made by Belcher), and in the absence of strong evidence we should let the market run its course with the current status quo.

If you have a larger number of makers, the 'top ones' would not be that much bigger than the others

I hope you are right, but let's say we have between 400-1k makers (much more than the JM average over the past 6-7 years); I don't see it at all improbable/unthinkable for the top 20-40 to be significantly bigger than the other hundreds. It's not something we can ever rule out since it's a social phenomenon.

PulpCattel avatar Sep 13 '22 10:09 PulpCattel

EDIT: It wouldn't be the case because, as said in this discussion multiple times, other people have different ideas about how bad the sybil gets (i.e., how cheap it gets) with a value like 1.3, and they think it's still good enough. Are you considering this scenario at all?

Re-framing it as 'good enough' is beside the point. The original value was part of the original design, and it would address Sybil risk better than the current value. I don't think anyone is disputing this. It's a centralization versus Sybil risk trade-off. Due to the current market adoption of fidelity bonds, the centralization risk in my opinion is not as prevalent.

Fidelity bonds were created to address Sybil risk. Again, I don't think anyone is disputing this. The original adjustment was made at a time when we noticed a lack of fidelity bond adoption. Again, no dispute there. Regardless of whether you believe it was a temporary change (which I believe it was), the market has now changed, and it is always worthwhile to re-evaluate a change that was made because of a specific condition (the lack of fidelity bonds) when that condition has changed (as it has today).

Because your strongest argument, repeated all over this issue (multiple times just in your last comment), is based on the assumption that it was a temporary measure and that 2.0 has always been considered the preferred long-term value (this last point at least has already been proved wrong by Belcher's comment). If this is not the case tho, then we go back to "someone has to prove someone else wrong" to justify this change or any other change to this value, and this seems to be extremely tricky to do.

Nice for you to pick that one out. I actually think my strongest arguments are:

  1. Reverting the changes protects the newer users of Joinmarket who are not aware of Sybil risk. Experienced users who are aware of the risk (such as yourself) can set the values to what they feel is 'adequate'.
  2. It boils down to a centralization versus Sybil risk argument. The risk of centralization is no longer as prevalent now that fidelity bonds are adopted by a larger number of makers. The Sybil risk remains and is possibly exacerbated by the lower exponent.

Regarding Chris' comment, let's let him speak for himself instead of using it as a 'gotcha' to prove someone wrong. I believe the original design and values were correct, but the design assumed a mature market with diversity in fidelity bonds. Of course, when bonds were introduced, adoption was slow. People noticed issues with some of the top makers receiving what they felt was a disproportionate number of coinjoins. These factors, along with the lack of fidelity bonds, led to the discussion for the initial adjustments. Whether or not you believe it to be a temporary change, it was made due to observations of the market. I don't think Chris suddenly decided he wanted to change the default values on a whim.

Consider that market makers take economic decisions based on the default JM picks.

Again, market makers that are aware of the risk can adjust the defaults. We want to account for the users that are not aware of the risks and don't even know what the default values are or how to adjust them.

Constantly changing it is horrible. It was changed the first time because it seemed a clear improvement; it has to be an even greater improvement now to change it again.

Why would it be 'horrible'? This is open source software. It... changes. And I certainly hope things continue to change if required. We already went through the 'let's not change anything because we changed it before' argument and put it to bed. Should we never adjust or change anything simply on the basis that it was changed previously? We should always evaluate where things are and adjust if need be. Who determines how 'great' an improvement it has to be to change it again?

I'm also curious what would happen if we revert and two months from now someone else comes and says the opposite of what you are saying. I.e., "Too much centralization, it's not fair, the market has changed again, not enough bonds anymore, etc."

Doubtful. Widespread adoption of fidelity bonds can't be reversed. The genie is out of the bottle. I don't foresee any scenario in which centralization risk is increased with a growing userbase of Joinmarket and greater adoption of bonds.

On the other hand, if we wait instead and say something like "let's give more time to the market to mature before reverting" then it feels like even more of a rug pull to change it after so many makers are probably considering it set in stone.

It's not a 'rug pull' any more than the original change was a 'rug pull'. Nothing is set in stone with open-source software. But if you'd like it to be, then just never update the software.

To me this is a case where any non-extreme value is fine (this is also the argument made by Belcher), and in the absence of strong evidence we should let the market run its course with the current status quo.

Not sure why 'non-extreme' as a terminology is entering this discussion. Everything is on a spectrum and there are no 'extremes'.

As for 'strong evidence', we can never fully prove that x makers are controlled by the same entity. What we know is that centralization risk is no longer as prevalent due to a growing adoption of users and bonds, so reverting the value should address Sybil risk better, as the original design intended. Letting the market deal with it is generally something I support, but requiring newer users of Joinmarket to manually adjust values upwards in joinmarket.cfg is not the best approach. It's much better for the market to start with a default that addresses Sybil risk better, and for the more knowledgeable users to adjust this value if they want to take on more Sybil risk.

I hope you are right, but let's say we have between 400-1k makers (much more than the JM average over the past 6-7 years); I don't see it at all improbable/unthinkable for the top 20-40 to be significantly bigger than the other hundreds. It's not something we can ever rule out since it's a social phenomenon.

Can't rule it out obviously, but in the case where you'd have 400-1k bonded makers, remember that the number of makers in each coinjoin usually ranges from 7 to 10. There's a very low likelihood that a 'top' maker would get a disproportionate number of coinjoins, just on the basis of the 'large crowd'. Bring that bonded maker count down to around 20 (which is what we had a few months ago), and the number of regular coinjoins that maker gets increases significantly.
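
One way to put rough numbers on this intuition (a toy estimate with an invented Pareto bond distribution; it assumes each of the ~8 slots in a join is filled with probability proportional to bond_value ** exponent, which is a simplification):

```python
import numpy as np

rng = np.random.default_rng(7)

def top_maker_join_rate(n_makers, exponent, slots=8, trials=5_000):
    """Estimated fraction of joins that include the single largest bond."""
    bonds = np.sort(rng.pareto(1.5, size=n_makers) + 1.0)[::-1]
    p = bonds ** exponent
    p /= p.sum()
    hits = sum(
        0 in rng.choice(n_makers, size=slots, replace=False, p=p)  # index 0 = largest bond
        for _ in range(trials)
    )
    return hits / trials

for n in (20, 400):
    for exp in (1.3, 2.0):
        print(f"{n} makers, exponent {exp}: top maker appears in "
              f"{top_maker_join_rate(n, exp):.0%} of joins")
```

The output depends heavily on the assumed bond-size distribution and its shape parameter, so running something like this against real order-book snapshots would settle the 'large crowd' question better than intuition in either direction.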

jmmadn avatar Sep 14 '22 09:09 jmmadn

I thought a bit about how to answer because at this point it's not easy to follow a straight discussion. Hopefully this extra comment clarifies the discussion for other readers. This is also an interesting discussion to have in general so, why not.

TL;DR: I don't think your recommendation is sensible given what we know today. But it's certainly a valid opinion to have and we all hope someone eventually will prove it either way.

Let's summarize this issue:

Your OP (and this earlier comment of yours too) asks, and this is the only argument you make there:

Now that we will soon have more offers with bonds than without, should we consider switching back to the original exponent of 2 to reduce the likelihood of Sybil counterparties?

Both Adam and kristapsk tell you no, i.e., "Lowering the exponent is different: it was to give higher weight to lower-value bonds, not because of low fidelity bond adoption."

Note that your only strong argument there keeps being:

This tradeoff was accepted because the number of fidelity bonds in the market was not very large and most of the makers didn't have fidelity bonds.

As the change in the exponent was seemingly made for market conditions and to address fidelity bonds not fully 'arriving' yet, my recommendation would be to revert now that the market has 'caught up'.

This is already weird because those claims have already been refuted. I don't know why you feel so strongly about that when the two lead developers are telling you that wasn't the case (plus the release note). But you clearly do, since this argument appears in literally every comment you make in this discussion, to the point that I'm not sure there's anything else that can be said or written to change your mind about it.

In your first few comments you also start mentioning what you'll define later as "one of your strongest arguments":

  1. Reverting the changes protects the newer users of Joinmarket who are not aware of Sybil risk

But this argument is based entirely on the "market has matured and therefore centralization is solved" assumption. Because otherwise I could argue: "Not reverting the changes protects the newer users of Joinmarket who are not aware of centralization risk". This already is a much more complicated discussion to have, where it's not easy to determine who's right and who's wrong.

Once this transition happens, i.e., now that the discussion is focused on your "second strongest argument" (the only one at this point, since the others have either already been refuted or depend on this, and as you say "it boils down to this"):

  2. It boils down to a centralization versus Sybil risk argument. The risk of centralization is no longer as prevalent now that fidelity bonds are adopted by a larger number of makers. The Sybil risk remains and is possibly exacerbated by the lower exponent.

We move to "can you prove it?" discussion. To which you answered things like:

Because it's not about proving anyone wrong. One approach protects new makers who are not knowledgeable about Sybil risk in a better way. The centralization risk has been addressed naturally with the market maturing.

I don't subscribe to this - it seems very binary - 'for me to be right then you have to be wrong'. I think you make good points and I see your point of view. It's not all black and white.

I.e., no, you can't. And this is fine; I can't prove mine either at the moment, nor probably can anybody else involved here without further research (the whole discussion, both here and in #1247, should have made clear it's not easy to come up with a clear-cut answer; the consensus was that 1.3 and 2.0 both offer good sybil resistance (as Belcher put it, "it still costs hundreds of thousands of bitcoins locked up for months") but 1.3 offers much greater decentralization).

All I'm trying to say is that the moment you change your argument from "it was always meant to be reverted, so it's only a matter of when" to "it boils down to a centralization versus Sybil risk argument; the risk of centralization is no longer as prevalent now that fidelity bonds are adopted by a larger number of makers", then it becomes something you (or someone else for you) have to prove. And I'd be delighted if it happens, because we could finally settle on a stable value once and for all. Otherwise, my point is that your recommendation is neither valid nor sensible, nor do I understand why you keep recommending it if you can't back it up with either evidence or other people's support.

Some final points.

I don't think Chris suddenly decided he wanted to change the default values on a whim.

He didn't. Adam introduced the possibility of fractional exponents and he realized he hadn't thought about that.

Again, market makers that are aware of the risk can adjust the defaults.

No, they can't. Note, makers not takers. Makers cannot change the default takers use. They have to decide how much to lock and for how long, and they do that also based on the default JM picks. Also, again, this argument (even if talking about takers) goes both ways. Takers can also increase their exponent if they think the sybil protection is not enough.

Doubtful. Widespread adoption of fidelity bonds can't be reversed.

It does not need to be reversed (tho it can ofc be: bonds can expire, makers can decide it's not worth it anymore, etc.). I can make this claim right now: "it's too centralized, it's not fair". You do not have any strong evidence against this claim. That's the point. If I am as convinced as you are, we are in a Mexican standoff kind of situation, where nobody can convince the other.

Why would it be 'horrible'?

It's not a 'rug pull' any more than the original change was a 'rug pull'

And in fact the first one was a rug pull; please re-read #1247, the rug pull problem was mentioned by multiple people. This is why I'm saying that it required strong consensus and evidence to justify it. See also the final recap in the PR that actually changed it. I can even accept the argument that it was a bad decision in retrospect, because 2.0 was probably not terrible, and once we change it once (even if 1.3 is indeed a better fit and everyone was happy with it) we open the door to issues like this and an endless debate. This happens specifically because it's kinda hard to reason about it objectively and a lot of work is needed to have solid evidence. I also hope Adam would have stopped pushing for it the moment multiple people were against it / not convinced (this didn't happen), given he didn't have strong evidence either. And I think he would have, just as he did for the bondless_maker_allowance.

Who determines how 'great' an improvement it has to be to change it again?

All the people that care and come here to provide evidence and data to justify it until it gathers strong consensus. I'm telling you, you did not provide any of that. I would have no problem if you presented it as your personal opinion, but the moment you keep strongly recommending it as if there's a strong case made for it (despite all the uncertainty presented to you by other people), it immediately sounds weird.

Not sure why 'non-extreme' as a terminology is entering this discussion. Everything is on a spectrum and there are no 'extremes'.

Yes, there are ofc. A value of 10 as the exponent would be an awful default, and the argument has been that 2.0 could already be too extreme to be justifiable, given 1.3 already creates a disincentive to spread out the fidelity bond.

What we know is that centralization risk is no longer as prevalent due to a growing adoption of users and bonds

Please, prove it. I don't understand why you are so confident about something without evidence when no one else agrees with that. It's absolutely fair for you to have an opinion, and I appreciate that you seem to care a lot about this, but more than that is required.

PulpCattel avatar Sep 17 '22 11:09 PulpCattel