Connection Refused
Hey guys, I'm getting connection refused errors while adding nodes to a layer. The layer is created using the /db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer endpoint with a POST method and this data:
{ "layer": "locations", "lat": "lat", "lon": "lon" }
Then, to add a point to the layer, I use /db/data/ext/SpatialPlugin/graphdb/addNodeToLayer with a POST method and this data (1234 is the node ID):
{ "layer": "locations", "node": "http://localhost:7474/db/data/node/1234" }
This works just fine, but after doing it a few thousand times I start getting connection refused. It is not related to the data: it breaks after about 7,000 items are processed, and if I skip the first 6,000 it does not break anywhere near the 1,000th item. Does anyone have any idea what the issue might be? I have tried enabling HTTP logging but it didn't help much.
The PHP exception message is:
log.ERROR: cURL error 7: Failed to connect to localhost port 7474: Connection refused (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
That link describes error 7 as: Failed to connect() to host or proxy.
This might be valuable information:
- Neo4j version: 3.0.4
- Spatial plugin version: 0.19 for neo4j 3.0.3
- Operating system: Mac OS X 10.10
- API/Driver: HTTP
- Steps to reproduce: not sure, it is a bit volatile; maybe it is a matter of configuration. I will try to isolate the issue, but if anyone has any tips they would be more than welcome.
- Expected behavior: It shouldn't refuse connections after a few thousand requests.
- Actual behavior: It refuses connections after a few thousand requests.
I have tried switching from the direct HTTP endpoints to the Cypher endpoint with procedures; that one hangs in a never-ending call at item 708, and again, if I skip the first 600 items it does not fail at step 108. I have tried putting some delay between the calls, but no joy.
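For the procedure-based attempt, the request looks roughly like this (a sketch; the spatial.addNode procedure name and its YIELD column are taken from the plugin docs, so check CALL dbms.procedures() for the exact signature in your version):

```php
<?php
// Sketch: one addNode call per existing node via the transactional Cypher endpoint.
$statement = "MATCH (n) WHERE id(n) = {id} " .
             "CALL spatial.addNode('locations', n) YIELD node RETURN node";

$payload = json_encode([
    'statements' => [[
        'statement'  => $statement,
        'parameters' => ['id' => 1234],   // node ID, one request per node
    ]],
]);

$ch = curl_init('http://localhost:7474/db/data/transaction/commit');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);
curl_close($ch);
```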
Any chance you can share your data to reproduce? Also any log file showing errors?
Do you run the requests concurrently?
No concurrent requests. Oddly, the same strange way the problem appeared, it has now gone. If it happens again I will try to isolate it with fake data, as I can't share the data I'm working with. The hard part was exactly that: there was nothing in the logs. Anyway, thanks for trying to help :)
Check both the logs/neo4j.log and logs/debug.log for possible errors. Also, if you cannot share your data, perhaps you can create a fake dataset that reproduces this?
From the regularity and the numbers I bet it triggers a rebalance that is slow enough to time out the HTTP socket. Could you possibly print out the index nodes of the R-tree just before it fails?
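Something along these lines should do it (a rough sketch; the relationship type names and the layer property come from the plugin source and may differ between versions):

```cypher
// Count R-tree index nodes and geometry references for the 'locations' layer.
// Assumed names: LAYER node property 'layer', relationships RTREE_ROOT,
// RTREE_CHILD, RTREE_REFERENCE.
MATCH (layer {layer: 'locations'})-[:RTREE_ROOT]->(root)
MATCH (root)-[:RTREE_CHILD*0..]->(index)
OPTIONAL MATCH (index)-[:RTREE_REFERENCE]->(geom)
RETURN count(DISTINCT index) AS indexNodes, count(DISTINCT geom) AS geometryNodes
```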
Indeed. Depending on how the HTTP endpoint is implemented, it's possible that you have concurrent modifications. It's a known bug with the current RTree implementation that concurrent writes, one of which triggers a rebalance, may result in a concurrent write failure due to the way the locks cascade during the rebalance.
Will the HTTP endpoint accept a second request before the first is complete and the tree fully modified?