New-SFTPSession : Socket read operation has timed out after 30000 milliseconds.
Hello,
Lately I'm getting errors when using New-SFTPSession.
My code: $ThisSession = New-SFTPSession -ComputerName $SftpIp -Credential $Credential -AcceptKey:$true -Force -ConnectionTimeout 200 -OperationTimeout 200
The error that I'm getting this time, and can't seem to figure out, is: "New-SFTPSession : Socket read operation has timed out after 200000 milliseconds".
I have tried the -Force option and setting the timeout to 200. I also checked my Azure server and connection, and those work fine.
Really could use some help with this, thanks.
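Before digging into the module's timeout handling, it can help to rule out basic reachability. A minimal sketch, assuming `$SftpIp` from the snippet above and the default SFTP port 22 (adjust if your server uses another port):

```powershell
# Quick reachability check before calling New-SFTPSession.
# Assumes $SftpIp holds the server address and SFTP listens on port 22.
$result = Test-NetConnection -ComputerName $SftpIp -Port 22

if ($result.TcpTestSucceeded) {
    Write-Host "TCP connection to ${SftpIp}:22 succeeded"
} else {
    Write-Warning "Cannot reach $SftpIp on port 22 - check firewall/VPN before tuning timeouts"
}
```

If the TCP test itself is slow or failing, the problem is network-side rather than in the module's timeout parameters.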
that is a super large time out, why such a large timeout?
I am having very similar issue: My TimeOut is set to 30
My command is such:
# Establish the SFTP connection
$SFTPSession = New-SFTPSession -ConnectionTimeout $TimeOut -ComputerName $RemoteServer -Credential $Credential -AcceptKey -Verbose -Debug
I get the error: "Socket read operation has timed out after 30000 milliseconds". HOWEVER, I put a stopwatch on this, and any value of TimeOut above 20 will always fail at 20 seconds. The default value works and errors at 10 seconds; I tried a 15-second timeout value and it timed out at 15 seconds. But anything greater than 20 doesn't matter: it always stops at 20 seconds. Is this by design? Is there a way to raise this value in the code? I have a lot of hops and hoops to jump through behind the corp firewall, connection through a VPN tunnel, etc., so it takes a while for me to reach and connect to my host (longer than 20 seconds to get a reply).
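The stopwatch test described above can be reproduced like this. A minimal sketch, assuming `$TimeOut`, `$RemoteServer`, and `$Credential` are already defined as in the command shown earlier:

```powershell
# Measure how long New-SFTPSession actually waits before failing,
# to compare the real elapsed time against the requested -ConnectionTimeout.
$sw = [System.Diagnostics.Stopwatch]::StartNew()
try {
    $SFTPSession = New-SFTPSession -ConnectionTimeout $TimeOut `
        -ComputerName $RemoteServer -Credential $Credential `
        -AcceptKey -ErrorAction Stop
    Write-Host "Connected after $($sw.Elapsed.TotalSeconds) seconds"
} catch {
    Write-Warning "Failed after $($sw.Elapsed.TotalSeconds) seconds: $_"
} finally {
    $sw.Stop()
}
```

If the elapsed time on failure is consistently ~20 seconds regardless of `$TimeOut`, that points at a cap somewhere below the parameter, as reported here.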
I am having a similar issue. Seems like it ignores any setting above 20s
The library is not a good one for low latency connections. Why such a large timeout? Are you running the latest version of the module? Recently some of the timeout parameters were fixed, since they would not set the proper value.
In my case, I updated to 3.0.8 and still had the timeout at 20s. I have about 30 jobs that use Posh-SSH, 10 of which connect to the exact same server, and only these two jobs, which run at different times alongside some other jobs, have this issue.
I simply put in a while loop to try 10 attempts and am waiting for the next failure. I haven't been able to find a pattern, as the jobs run fine if I just run them again, but they were still ignoring the timeout. I was only setting it to 60s. Will note here if something different pops up or I have some other useful data.
Not sure why, but after I put in a simple retry loop, I haven't had any more timeouts. I was expecting to tweak this a bit, but maybe adding the sleep just before the call is having some effect. In my actual code, I echo out the attempt number, and it's worked every time on the first attempt since I put this loop in. Before, it was just the $s line. Some pseudocode below.
# Retry New-SFTPSession up to 10 times, pausing 1 second before each attempt.
# $s is $null until a session is established, so the loop exits on success.
$i = 0
while (!$s -and $i -lt 10) {
    $i++
    Start-Sleep -Seconds 1
    $s = New-SFTPSession #myargs
}
I closed the bug I had open in my job repo that used this until I have at least one more failure. For now I'll assume maybe powershell needed a brief pause before running new-sftpsession.
Hey, just came back to the thread - @darkoperator - no, I have not updated lately. I might give that a try to see - thanks!
Same issue: the maximum timeout is 20 seconds, no matter what value it is set to above that.
On the server side, is it allowing those higher numbers? Are ClientAliveInterval and ClientAliveCountMax being set?
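For reference, those keepalive settings live in the server's sshd_config. A sketch of values that would tolerate longer idle intervals (the numbers here are illustrative, not a recommendation for any particular environment):

```
# /etc/ssh/sshd_config (server side)
ClientAliveInterval 60    # probe the client after 60 s of inactivity
ClientAliveCountMax 3     # drop the connection after 3 unanswered probes (~180 s total)
```

Remember to reload sshd after changing these for the new values to take effect.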