
Gun stops broadcasting data

Open boufni95 opened this issue 4 years ago • 5 comments

Hi, I've been running some tests with gun to try to reach its limits.

I made a simple test: 1 receiver node and N sender nodes.

I posted the full set of tests here, but it all boils down to this:

The N senders have the receiver's pub available, and they use it to put a random string into a set:

user
    .get(receiverPub)
    .set(random)

The receiver retrieves all the senders' pubs from a public node and uses them to listen for requests:

gun.user(pub).get(myPub).open(obj => {
    Object.values(obj).forEach(random => {
        if (!processedRandoms[random]) {
            processedRandoms[random] = true
            user.get(random).put('ok')
        }
    })
})
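
For context, a minimal sketch of how the receiver side might wrap this for all N senders, assuming a hypothetical senderPubs array already fetched from the public node (not the author's exact code):

// Hypothetical wrapper; .open comes from require('gun/lib/open')
const processedRandoms = {}

senderPubs.forEach(pub => {
    gun.user(pub).get(myPub).open(obj => {
        Object.values(obj).forEach(random => {
            if (!processedRandoms[random]) {
                processedRandoms[random] = true
                user.get(random).put('ok')   // ack on the receiver's own graph
            }
        })
    })
})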

All N senders send a new request as soon as they get 'ok' for the previous one.

Now, with N = 1 it works fine and does a ton of requests per second (~100 req/s). But already with N = 2 I start having problems: after 5 or 6 seconds the receiver stops getting updates from one of the two senders, and only one sender keeps working. Sometimes the remaining sender keeps working until I quit the test; other times it stops too. With higher N the problems show up before the 5-6 second mark.

Initially I was testing with a local superpeer and multicast disabled; removing the superpeer and enabling multicast helped, but not by much. I haven't completely ruled out a Node performance issue yet, but I'm confident enough that gun, not Node, is the issue here. Did anyone have a similar issue?

boufni95 avatar Feb 10 '21 13:02 boufni95

I think there might be an issue with your code, causing some recursive loops. Why are you using .open? Try using .on with the second argument {change: true} (see https://gun.eco/docs/API#-main-api- gun.on).
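
For illustration, a minimal sketch of the suggested change on the receiver side, keeping the rest of the original handler (a hedged example, not sirpy's exact code):

// With {change: true}, the callback receives only the changed data for each update
gun.user(pub).get(myPub).on((data, key) => {
    Object.values(data || {}).forEach(random => {
        // skip the `_` metadata object; only the random strings are of interest
        if (typeof random === 'string' && !processedRandoms[random]) {
            processedRandoms[random] = true
            user.get(random).put('ok')
        }
    })
}, { change: true })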

sirpy avatar Feb 10 '21 16:02 sirpy

I just tested using .on(..., {change: true}), but I get the same result.

I'm not too sure about the loops issue. This is the core code for all N senders; everything that comes before it is just preparation. I used a Node EventEmitter to avoid loop issues:

myEmitter.on('sendReq', () => {
    const random = Crypto.randomBytes(6).toString('hex')
    const randomInt = parseInt(random, 16)
    sentSum += randomInt
    reqCount++
    latestSent = performance.now()

    // publish the request on the sender's own graph under the receiver's pub
    user
        .get(receiverPub)
        .set(random)

    // wait for the receiver to ack this specific random on its graph
    gun.user(receiverPub).get(random).on(data => {
        if (!data) {
            return
        }
        console.log(`got res to ${random}`)
        console.log(data)
        const diff = performance.now() - latestSent
        confirmedSum += randomInt
        console.log(`SUM REPORT>${confirmedSum}:${sentSum}:${reqCount}:${performance.now() - StartTime}>`)
        if (diff >= Timeout) {
            console.log('operation took longer than timeout')
            myEmitter.emit('sendReq')
        } else {
            setTimeout(() => { myEmitter.emit('sendReq') }, Timeout - diff)
            console.log('got response will retry in ' + (Timeout - diff))
        }
    })
})
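
For readers skimming the thread, a minimal sketch of the preparation this snippet assumes; this is hypothetical setup, not the author's actual code (the full files are linked below):

// Hypothetical preparation assumed by the snippet above
const Crypto = require('crypto')
const { EventEmitter } = require('events')
const { performance } = require('perf_hooks')
const Gun = require('gun')
require('gun/sea')

const gun = Gun({ peers: ['http://localhost:8765/gun'] })  // assumed local superpeer
const user = gun.user()
const myEmitter = new EventEmitter()

const receiverPub = process.env.RECEIVER_PUB  // assumed to be passed in by the test runner
const Timeout = 100                           // assumed minimum ms between requests
const StartTime = performance.now()
let sentSum = 0
let confirmedSum = 0
let reqCount = 0
let latestSent = 0

// after user.auth(...) with the sender's key pair, kick off the first request:
myEmitter.emit('sendReq')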

boufni95 avatar Feb 10 '21 18:02 boufni95

Submit the complete test code, including the receiver.

sirpy avatar Feb 11 '21 07:02 sirpy

Sender: https://github.com/boufni95/gunstress/blob/master/basic/sender.js Receiver: https://github.com/boufni95/gunstress/blob/master/basic/receiver.js

This is the script I start the test with: https://github.com/boufni95/gunstress/blob/master/main.js

boufni95 avatar Feb 11 '21 14:02 boufni95

@boufni95 sounds like a good test idea! It'd be cool if we could add it to the PANIC tests to automate it.

I'm usually able to squeeze out a few tens of thousands of writes before JSON parsing / the single-threaded nature of JS starts wheezing on me, and yeah, peer count affects this wildly :/ Though starting last year I created a new experimental branch of GUN where I implemented a CPU scheduler inside GUN 🤣 to fix this sort of stuff. So I want to finish that and then run your tests. Thanks for reporting!

amark avatar Feb 19 '21 21:02 amark