No way to disconnect from a subcluster
There is no subcluster.destroy/close() method or other obvious way to disconnect from a subcluster.
Even calling subcluster.removeListener() doesn't seem to remove the listeners, so if I connect, disconnect, and then reconnect, each handler gets called twice.
What OS are you using (uname -a, or Windows version)?
Darwin Garths-MacBook-Pro.local 23.2.0 Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000 arm64
What version Socket Runtime are you using?
0.5.4 (97fa3f7c) Installation path: /Users/garth/Projects/beamer-p2p/node_modules/.pnpm/@[email protected]/node_modules/@socketsupply/socket-darwin-arm64/
What programming language are you using (C/C++/Go/Rust)?
What did you expect to see and what you saw instead?
There should be some way to disconnect from a socket subcluster.
socket.subclusters is a Map of event emitters. When a packet arrives, it looks for a subcluster (an event emitter) with the matching subclusterId. If you remove it and stop sending join packets for that subcluster, other peers will eventually stop sending you packets for it. It should work to do something like...
const key = clusterId.toString('base64')
socket.subclusters.delete(key)
We could add a helper method for it. Something like this?
socket.unsubscribe(clusterId)
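A minimal sketch of what such a helper could look like, based only on the Map-deletion approach described above. This is not the actual Socket Runtime API; the standalone `unsubscribe` function is hypothetical, and `socket.subclusters` is assumed to be a Map keyed by the base64 cluster id, as described earlier.

```javascript
// Hypothetical helper, sketched from the Map-deletion approach above.
// Assumes socket.subclusters is a Map keyed by the base64 cluster id.
function unsubscribe (socket, clusterId) {
  const key = clusterId.toString('base64')
  if (!socket.subclusters.has(key)) return false
  // Drop the emitter so incoming packets no longer find a matching subcluster.
  socket.subclusters.delete(key)
  return true
}
```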
Edit: Actually I need to make a small change to add this API; I'll push up my PR today.
ok... but if you do something like
subcluster.on('something', handler)
I would expect to be able to call
subcluster.off('something', handler)
but after calling off() the handler is still called.
The problem with that is that you still have the event emitter. Are you saying you just want to remove the event listener but keep the subcluster, in case you decide to rejoin it?
I tried both. Since I didn't find a way to destroy the subcluster, I created an object that subscribes on construction and then unsubscribes when destroyed.
At first I thought I was just seeing echoes from the network when I emitted, so I added an instance id to each object and logged it whenever a handler was called. It turned out that old instances that had already called off() were still being called.
I could handle this with an additional EventEmitter layer at the application level, but it seems like a bug?
Ah yeah, the emitter is monkey-patched, so adding an off method wouldn't work as-is. I can definitely make it work, though, so that you keep the subcluster but just remove the event listener.