
lookup for an ID assigned in 2 nodes is only returned once

Open · benoitc opened this issue 8 years ago · 7 comments

Following our discussion on Slack. The doc says:

Multiple peers can store the same identity. In that case, each peer is reflected in the table.

But when I run the following example on 3 nodes (started locally) I only get one result:

1) Launch 3 nodes:

node 1:

1>  application:load(dht),
1>  application:set_env(dht, port, 1729).
ok
2>  application:ensure_all_started(dht).
{ok,[crypto,dht]}

node 2:

1>  application:load(dht),
1>  application:set_env(dht, port, 1730).
ok
2>  application:ensure_all_started(dht).
{ok,[crypto,dht]}

node 3:

1>  application:load(dht),
1>  application:set_env(dht, port, 1731).
ok
2>  application:ensure_all_started(dht).
{ok,[crypto,dht]}

2) Then node 2 and node 3 join node 1:

node 2 & node 3:

3> dht:ping({{127, 0, 0, 1}, 1729}).
{ok,164275074679377439255457697552131619293959778100}

3) Register the ID 456 on node 1 and node 2:

node1:

4> dht:enter(456, 5000).

node2:

4> dht:enter(456, 3000).

4) Running lookup on the different nodes returns:

node1:

5> dht:lookup(456).
[{{127,0,0,1},3000}]

node 2 & node 3:

5> dht:lookup(456).
[{{127,0,0,1},5000}]

Whereas I would have expected [{{127,0,0,1},3000},{{127,0,0,1},5000}] on each node.

benoitc commented Apr 27, 2016

I know what is going on at least, I think:

Rule #1: Each IP address can have at most a single port.
Rule #2: The IP address is determined by the receiver, because otherwise you can't easily traverse NAT.

Together, these two observations at least describe what you are seeing. Whether that is the right behavior is more up in the air, though. It is also what the model seems to encode.
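
To see why rule #1 alone already produces this result, here is a toy illustration (just a plain map keyed by IP, not the actual dht routing or store code): when two local nodes announce under the same IP, the second entry simply replaces the first.

1> T0 = maps:put({127,0,0,1}, 5000, #{}).   %% node 1 announces its port
#{{127,0,0,1} => 5000}
2> T1 = maps:put({127,0,0,1}, 3000, T0).    %% node 2, same IP, different port
#{{127,0,0,1} => 3000}
3> maps:to_list(T1).
[{{127,0,0,1},3000}]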

jlouis commented May 1, 2016

(1) is what I suspected, though I'm not sure how it works with (2). What if I have 2 machines behind a NAT advertising an ID on different ports?

benoitc commented May 1, 2016

Yeah, that ought to work. I'll figure out what is wrong here, because it doesn't look right and it should be reproducible fairly easily.

jlouis commented May 1, 2016

Ok, some investigation on this one:

The problem manifests itself because of the way dht:lookup/1 operates:

  • Look up in your own store,
  • If not found in your own store, perform a DHT search.

In this case, the local lookup dominates since you already have the value in local storage. Hence the DHT search never happens and you don't get the expected return values.
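
Put differently, the control flow is roughly the following (a sketch only; the real dht:lookup/1 may be organized differently, and dht_store:find/1 / dht_search:run/2 are the calls used in the workaround below):

lookup(ID) ->
    case dht_store:find(ID) of
        [] ->
            %% nothing stored locally: fall back to a full DHT search
            #{ found := Remote } = dht_search:run(find_value, ID),
            Remote;
        Local ->
            %% a non-empty local result short-circuits the search,
            %% so peers stored elsewhere are never merged in
            Local
    end.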

As a workaround, you can do something like:

l(ID) ->
    %% entries stored locally on this node
    Local = dht_store:find(ID),
    %% entries found by searching the wider DHT
    #{ found := Remote } = dht_search:run(find_value, ID),
    %% merge and deduplicate both result sets
    lists:usort(Local ++ Remote).

which will grab from both stores. But we need to figure out what the right API is in this situation, and also what the right return value ought to be.
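
For the example in this issue, calling that helper (assuming you put it in some module, say my_dht; the module name is just for illustration) should merge both announcements and give the result you originally expected:

6> my_dht:l(456).
[{{127,0,0,1},3000},{{127,0,0,1},5000}]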

(Edit: code that actually works)

jlouis commented May 15, 2016

@jlouis thanks! That should fix it temporarily.

Why not a dht:lookup/2 with the option to retrieve all the peers around that have this ID?

Ex: dht:lookup(ID, [neighbours | {limit, N} | first])

dht:lookup/1 would default to first, like it does now. Thoughts?
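
To make the proposal concrete, here is a rough sketch of how such a dht:lookup/2 could dispatch on a single option (none of this exists in the library yet; dht_store:find/1 and dht_search:run/2 are reused from the workaround above, and the search/1 helper is just for illustration):

lookup(ID, first) ->
    %% current behaviour: a local hit wins, otherwise search the DHT
    case dht_store:find(ID) of
        []    -> search(ID);
        Local -> Local
    end;
lookup(ID, neighbours) ->
    %% merge local and remote results, as in the workaround above
    lists:usort(dht_store:find(ID) ++ search(ID));
lookup(ID, {limit, N}) ->
    %% like neighbours, but cap the number of returned peers
    lists:sublist(lookup(ID, neighbours), N).

search(ID) ->
    #{ found := Remote } = dht_search:run(find_value, ID),
    Remote.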

benoitc commented May 15, 2016

Yes. Taking options to the lookup function is definitely a good idea, so you can control how this works. I'll need to think a bit, however, as it is not clear if it currently has the desired behavior.

jlouis commented May 15, 2016

Hopefully the patch in #15 can fix the problem once and for all :)

jlouis commented May 15, 2016