goloop
Inconsistency between endpoints
When querying the Sejong API, I am unable to call a contract method on my own nodes, while the same call succeeds against the solidwallet nodes.
import requests
import json

# icx_call against my own node fails with ProxyIsClosed
url = "http://108.61.103.8:9010/api/v3"
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "icx_call",
    "params": {
        "to": "cx79a1d066c9bab28090d9fc621a8212ca681a3553",
        "dataType": "call",
        "data": {"method": "name"},
    },
}
r = requests.post(url=url, data=json.dumps(payload)).json()
assert r['error']['message'] == 'SystemError(-30001): E0001:ProxyIsClosed'

# the same call against the public solidwallet endpoint succeeds
url = "https://sejong.net.solidwallet.io/api/v3"
r = requests.post(url=url, data=json.dumps(payload)).json()
assert r['result'] == 'Snapshot'
I reported this before in #99 but wanted to give a concrete example. Nothing shows up in the logs on the node side. The node responds to other requests such as icx_getScoreApi on this contract and others, but fails on this call for this contract and on roughly 25% of the other contracts on Sejong.
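For comparison, here is roughly the icx_getScoreApi request that does succeed against the same node. This is a minimal sketch reusing the endpoint and contract address from the snippet above, not the exact script I ran:

import requests
import json

# the same node answers icx_getScoreApi on the same contract without error (sketch)
api_payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "icx_getScoreApi",
    "params": {"address": "cx79a1d066c9bab28090d9fc621a8212ca681a3553"},
}
r = requests.post(url="http://108.61.103.8:9010/api/v3", data=json.dumps(api_payload)).json()
assert "result" in r  # returns the SCORE API description rather than an error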
@robcxyz The problem seems to be related to your node configuration, as the issue only occurs on your node. Can you restart your node with GOLOOP_LOG_LEVEL=trace to get more verbose logs, and let us know what the logs show when the problem occurs?
@sink772
I set logging to trace and looked through the logs, but I'm not seeing any relevant information. Perhaps you can tell me what I should be looking for, because as far as I can tell this isn't showing up in the logs. Does RPC traffic get logged internally?
Also, this isn't just one node; all 8 of my nodes display the same behavior. As for the config, not much is going on. I was advised by jinwoo to map ports, but that shouldn't have anything to do with this issue.
environment:
SERVICE: SejongNet
GOLOOP_LOG_LEVEL: "trace"
FASTEST_START: "true"
ROLE: 0
IS_AUTOGEN_CERT: "true"
GOLOOP_P2P_LISTEN: ":7110"
GOLOOP_RPC_ADDR: ":9010"
@robcxyz If you are using https://github.com/icon-project/icon2-docker, the verbose log messages should appear in logs/goloop.log. We need some clue about the situation. Can you show us some log messages from goloop.log for verification? In particular, I want to see the IPC log messages between goloop and JavaEE.
@sink772 - Sorry for the long radio silence on this. I had to sidestep the issue before, but I'm running into it again now. Here is the log during the request.
D|20221030-19:57:35.758320|e313|-|javaee|javaee.go:98 runInstances with uid(cff9c890-e465-4bde-93be-1887c0ec646d)
D|20221030-19:57:35.758352|e313|-|javaee|TRACE foundation.icon.ee.ipc.ManagerProxy [KILL] uuid=93742d58-5703-44fa-a470-45e9de6b4694
D|20221030-19:57:35.758477|e313|-|javaee|TRACE foundation.icon.ee.ipc.ManagerProxy [RUN] uuid=cff9c890-e465-4bde-93be-1887c0ec646d
I|20221030-19:57:35.758744|e313|-|EEP|manager.go:282 ExecutorManager.onEEConnect(type=java,version=1,uid=cff9c890-e465-4bde-93be-1887c0ec646d)
D|20221030-19:57:35.758780|e313|-|javaee|javaee.go:125 OnAttach uid(cff9c890-e465-4bde-93be-1887c0ec646d)
D|20221030-19:57:35.758979|e313|-|ipc|server.go:82 Fail to handle message err=EOF
github.com/icon-project/goloop/common/errors.WithStack
/work/common/errors/errors.go:108
github.com/icon-project/goloop/common/codec.(*mpReader).ReadList
/work/common/codec/msgpack.go:66
github.com/icon-project/goloop/common/codec.(*decoderImpl).decodeList
/work/common/codec/codec.go:410
github.com/icon-project/goloop/common/codec.(*decoderImpl).decodeValue
/work/common/codec/codec.go:693
github.com/icon-project/goloop/common/codec.(*decoderImpl).decode
/work/common/codec/codec.go:774
github.com/icon-project/goloop/common/codec.(*decoderImpl).Decode
/work/common/codec/codec.go:453
github.com/icon-project/goloop/common/codec.bytesWrapper.Unmarshal
/work/common/codec/bytes.go:19
github.com/icon-project/goloop/common/ipc.(*connection).HandleMessage
/work/common/ipc/connection.go:115
github.com/icon-project/goloop/common/ipc.(*server).handleConnection
/work/common/ipc/server.go:79
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1581
W|20221030-19:57:35.759018|e313|-|pyee|pyee.go:146 Instance uid=9ab5af05-59af-4f35-8f32-2ac87a5ab5e8 is killed err=signal: killed
I|20221030-19:57:35.759065|e313|-|pyee|pyee.go:172 start instance uid=f40ab630-f0a3-46d2-ab55-492688519944
I|20221030-19:57:35.845271|e313|-|EEP|manager.go:282 ExecutorManager.onEEConnect(type=python,version=1,uid=f40ab630-f0a3-46d2-ab55-492688519944)
Also, these are my compose environment variables:
SERVICE: SejongNet
GOLOOP_LOG_LEVEL: "trace"
FASTEST_START: "true"
ROLE: 0
IS_AUTOGEN_CERT: "true"
GOLOOP_P2P_LISTEN: ":7110"
GOLOOP_RPC_ADDR: ":9010"
Neither GOLOOP_P2P_LISTEN nor GOLOOP_RPC_ADDR should have any effect on this bug.
Hello @robcxyz - I also almost forgot about this issue for a while. :)
Actually, we have already found the root cause of this ProxyIsClosed issue, and we've applied patches for it since v1.2.9. The following items are relevant to the issue:
- Fix issue of ForceSync not synchronizing ObjectGraph
- Implement DataSyncer
By the way, here are the log messages related to the problem. I believe you have the same log on your side, so next time please share the whole log file instead of just an excerpt.
D|20221031-06:45:59.447649|14e4|9e1477|SV|81865877|proxy.go:557 Failed to getObjGraph err(E2500:NoValueInHash(hash=0xf623ddb44cb106b91e4b95cedaa7d3d6e27a7e42881099ce511b1c506ef4aa49))
D|20221031-06:45:59.447769|14e4|-|ipc|server.go:82 Fail to handle message err=E2500:NoValueInHash(hash=0xf623ddb44cb106b91e4b95cedaa7d3d6e27a7e42881099ce511b1c506ef4aa49)
github.com/icon-project/goloop/common/errors.Errorcf
/work/common/errors/errors.go:185
github.com/icon-project/goloop/common/errors.Code.Errorf
/work/common/errors/errors.go:71
github.com/icon-project/goloop/service/state.(*objectGraph).Get
/work/service/state/objectgraph.go:124
github.com/icon-project/goloop/service/state.(*accountData).GetObjGraph
/work/service/state/account.go:201
github.com/icon-project/goloop/service/contract.(*CallHandler).GetObjGraph
/work/service/contract/callhandler.go:622
github.com/icon-project/goloop/service/eeproxy.(*proxy).HandleMessage
/work/service/eeproxy/proxy.go:555
github.com/icon-project/goloop/common/ipc.(*connection).HandleMessage
/work/common/ipc/connection.go:127
github.com/icon-project/goloop/common/ipc.(*server).handleConnection
/work/common/ipc/server.go:79
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1581
W|20221031-06:45:59.447930|14e4|9e1477|SV|callcontext.go:340 cleanUpFrames() TX=<> err=E1008:ProxyIsClosed
github.com/icon-project/goloop/common/errors.Errorc
/work/common/errors/errors.go:178
github.com/icon-project/goloop/common/errors.Code.New
/work/common/errors/errors.go:67
github.com/icon-project/goloop/service/eeproxy.(*proxy).OnClose
/work/service/eeproxy/proxy.go:637
github.com/icon-project/goloop/service/eeproxy.(*executorManager).OnClose.func1
/work/service/eeproxy/manager.go:169
github.com/icon-project/goloop/common.(*AutoCallLocker).Unlock
/work/common/mutex.go:59
github.com/icon-project/goloop/service/eeproxy.(*executorManager).OnClose
/work/service/eeproxy/manager.go:172
github.com/icon-project/goloop/common/ipc.(*server).handleConnection
/work/common/ipc/server.go:89
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1581
Lastly, I force-synced your node (http://108.61.103.8:9010/api/v3) to the correct objgraph data, so from now on your node will respond with the correct result.
Hey @sink772 - That fixed it, thanks.
That was the only part of the log that seemed to come back with a trace, but noted on sharing the whole log next time.
Can you let me know what you did to force sync? I have several other nodes with the same behavior, and they are all updated to the latest version.
We've implemented DataSyncer since v1.2.9 to fill in missing objgraph data when a node fails to get it. But this logic only works if your node is connected to parent or uncle nodes that have the correct objgraph data. The reason your node wasn't getting the data properly from its parent or uncle nodes is that the data wasn't there either, so I did the following:
- icx_call to the parent node (54.178.25.57) to fill the data there first
- then icx_call to your node (108.61.103.8)
Therefore, if you have several other nodes with the same issue, please follow the steps above.
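For reference, a rough sketch of those two calls in Python, reusing the icx_call payload from the first comment. The parent node's RPC port and path are assumptions here, not something stated in this thread, so adjust them to the parent's actual endpoint:

import requests
import json

# reuse the icx_call payload from the first comment in this thread
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "icx_call",
    "params": {
        "to": "cx79a1d066c9bab28090d9fc621a8212ca681a3553",
        "dataType": "call",
        "data": {"method": "name"},
    },
}

# 1) call the parent node first so it fills in the missing objgraph data
#    NOTE: port 9000 and the /api/v3 path are assumed, not confirmed above
parent_url = "http://54.178.25.57:9000/api/v3"
print(requests.post(parent_url, data=json.dumps(payload)).json())

# 2) then call the affected node; DataSyncer can now pull the data from the parent
node_url = "http://108.61.103.8:9010/api/v3"
print(requests.post(node_url, data=json.dumps(payload)).json())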