
FortiGate Error - Backups Not Working

pizu opened this issue 5 years ago • 6 comments

2019-07-22 12:54:38,427 ERROR [QuartzScheduler_Worker-1] TakeSnapshotTask: Error while taking the snapshot.
javax.script.ScriptException: TypeError: Cannot read property "2" from undefined in <eval> at line number 206
        at jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:470)
        at jdk.nashorn.api.scripting.NashornScriptEngine.invokeImpl(NashornScriptEngine.java:392)
        at jdk.nashorn.api.scripting.NashornScriptEngine.invokeFunction(NashornScriptEngine.java:190)
        at onl.netfishers.netshot.device.script.SnapshotCliScript.run(SnapshotCliScript.java:77)
        at onl.netfishers.netshot.device.script.CliScript.connectRun(CliScript.java:236)
        at onl.netfishers.netshot.device.script.CliScript.connectRun(CliScript.java:192)
        at onl.netfishers.netshot.work.tasks.TakeSnapshotTask.run(TakeSnapshotTask.java:200)
        at onl.netfishers.netshot.work.TaskJob.execute(TaskJob.java:90)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: jdk.nashorn.internal.runtime.ECMAException: TypeError: Cannot read property "2" from undefined
        at jdk.nashorn.internal.runtime.ECMAErrors.error(ECMAErrors.java:57)
        at jdk.nashorn.internal.runtime.ECMAErrors.typeError(ECMAErrors.java:213)
        at jdk.nashorn.internal.runtime.ECMAErrors.typeError(ECMAErrors.java:185)
        at jdk.nashorn.internal.runtime.ECMAErrors.typeError(ECMAErrors.java:172)
        at jdk.nashorn.internal.runtime.Undefined.get(Undefined.java:157)
        at jdk.nashorn.internal.scripts.Script$Recompilation$3$3834AAA$\^eval\_.snapshot(<eval>:206)
        at jdk.nashorn.internal.scripts.Script$Recompilation$89$795AAAA$\^eval\_._connect(<eval>:439)
        at jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:645)
        at jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:494)
        at jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:393)
        at jdk.nashorn.api.scripting.ScriptObjectMirror.callMember(ScriptObjectMirror.java:199)
        at jdk.nashorn.api.scripting.NashornScriptEngine.invokeImpl(NashornScriptEngine.java:386)
        ... 8 common frames omitted
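
For context, this Nashorn TypeError is the typical symptom of a driver script indexing into a capture group on a variable that was never assigned, usually because a regular expression run against the device CLI output did not match on this FortiOS version. A minimal sketch of the failure pattern (hypothetical code, not the actual driver):

var haOutput = cli.command("get system ha status"); // hypothetical CLI call
var match;                                          // stays undefined if the guard below fails
if (/Cluster Uptime/.test(haOutput)) {
    match = haOutput.match(/Master *: *(\S+), *(\S+)/);
}
var serial = match[2]; // TypeError: Cannot read property "2" from undefined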

pizu • Jul 22 '19 10:07

I went back to FortiGate driver v3.1 (v0.12.7) and it is working again.

pizu • Jul 25 '19 07:07

I cannot see how this could happen. Which Java version are you using to run Netshot? (java -version)

SCadilhac • Jul 25 '19 21:07

openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)

pizu • Jul 30 '19 13:07

Hi,

Do you have any updates, please?

pizu • Sep 03 '19 15:09

Could you try the following driver, and provide the collected debug logs?
https://gist.github.com/SCadilhac/5b8b193ec9a8795476ed5833ad905cd6
Please check that you have 4.1-debug in the Admin tab for the Fortinet driver before running the snapshot on the target device.

SCadilhac • Sep 12 '19 15:09

Hi @SCadilhac, that did the trick, it worked.

Debug Logs:

[INFO] Snapshot task for device NodeA (172.16.60.250).
[DEBUG] getHa =
HA Health Status: OK
Model: FortiGate-1500D
Mode: HA A-P
Group: 0
Debug: 0
Cluster Uptime: 14 days 12:25:39
Cluster state change time: 2019-08-30 02:20:56
Master selected using:
    <2019/08/30 02:20:56> FG1K5D3I15800479 is selected as the master because it has the largest value of override priority.
    <2019/08/30 02:17:51> FG1K5D3I15800479 is selected as the master because it's the only member in the cluster.
    <2019/08/30 02:17:43> FG1K5D3I15800479 is selected as the master because the peer member FG1K5D3I15800783 has SET_AS_SLAVE flag set.
    <2019/08/30 02:16:17> FG1K5D3I15800783 is selected as the master because it has the largest value of uptime.
ses_pickup: enable, ses_pickup_delay=disable
override: disable
Configuration Status:
    FG1K5D3I15800479(updated 3 seconds ago): in-sync
    FG1K5D3I15800783(updated 4 seconds ago): in-sync
System Usage stats:
    FG1K5D3I15800479(updated 3 seconds ago): sessions=337104, average-cpu-user/nice/system/idle=8%/0%/7%/79%, memory=60%
    FG1K5D3I15800783(updated 4 seconds ago): sessions=50112, average-cpu-user/nice/system/idle=0%/0%/1%/98%, memory=41%
HBDEV stats:
    FG1K5D3I15800479(updated 3 seconds ago):
        port1: physical/1000full, up, rx-bytes/packets/dropped/errors=12393019807/52404834/0/0, tx=23224106797/108397632/0/0
        port9: physical/1000full, up, rx-bytes/packets/dropped/errors=3843146817/6259185/0/0, tx=3974749839/6259451/0/0
    FG1K5D3I15800783(updated 4 seconds ago):
        port1: physical/1000full, up, rx-bytes/packets/dropped/errors=23270600415/109055796/0/0, tx=12319816293/51695404/0/0
        port9: physical/1000full, up, rx-bytes/packets/dropped/errors=3974417375/6258925/0/0, tx=3842503486/6258149/0/0
SessionSync dev stats:
    FG1K5D3I15800479(updated 3 seconds ago):
        port16: physical/1000full, up, rx-bytes/packets/dropped/errors=118801458770/948097969/0/0, tx=1030672479712/5295071601/0/0
    FG1K5D3I15800783(updated 4 seconds ago):
        port16: physical/1000full, up, rx-bytes/packets/dropped/errors=1030665493736/5295007876/0/0, tx=118760236650/947828234/0/0
MONDEV stats:
    FG1K5D3I15800479(updated 3 seconds ago):
        Link_2_Edge_RTR: aggregate/00, up, rx-bytes/packets/dropped/errors=100443230598955/88693911497/0/0, tx=7688409140857/12271589344/0/0
        Link_2_Nexus: aggregate/00, up, rx-bytes/packets/dropped/errors=455563229157615/478700595314/0/0, tx=574080367592927/566214361060/0/0
    FG1K5D3I15800783(updated 4 seconds ago):
        Link_2_Edge_RTR: aggregate/00, up, rx-bytes/packets/dropped/errors=569218303164/852960980/0/0, tx=10656768/83256/0/0
        Link_2_Nexus: aggregate/00, up, rx-bytes/packets/dropped/errors=49048783285/499723725/0/0, tx=10657024/83258/0/0
Master: NameA, FG1K5D3I15800479, HA cluster index = 1
Slave : NameB, FG1K5D3I15800783, HA cluster index = 0
number of vcluster: 1
vcluster 1: work 169.254.0.2
Master: FG1K5D3I15800479, HA operating index = 0
Slave : FG1K5D3I15800783, HA operating index = 1
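
For reference, the usual fix for this class of driver failure is to guard the regex match before reading capture groups, so an unrecognized output format fails soft instead of aborting the snapshot. A generic sketch against the "Master: ..." line above (an assumed pattern, not the actual 4.1 driver code):

function parseHaMaster(haOutput) {
    // Guard the match: FortiOS output varies across versions, so never
    // index into the result before checking that it exists.
    var match = haOutput.match(/Master *: *(.+?), *(\S+), HA cluster index = (\d+)/);
    if (!match) {
        return null; // unrecognized format: fail soft instead of throwing
    }
    return { name: match[1], serial: match[2], clusterIndex: parseInt(match[3], 10) };
}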

pizu • Sep 13 '19 12:09