SSH.NET
Performance issue when uploading a file.
When I want to upload a file, I can either use the method SftpClient.UploadFile, or I can use:
using (Stream destinationStream = sftpClient.OpenWrite(destinationPath))
{
    byte[] buffer = new byte[10 * 1024 * 1024]; // 10 MB buffer
    long totalBytesCopied = 0;
    int currentBlockSize = 0;
    Progress(totalBytesCopied);
    while ((currentBlockSize = fileStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        destinationStream.Write(buffer, 0, currentBlockSize);
        totalBytesCopied += currentBlockSize;
        Progress(totalBytesCopied);
    }
}
With SftpClient.UploadFile, there is no problem.
But if I use the previous code, the performance is very, very bad.
And I am sure "fileStream.Read" is not the problem.
The method "Progress" is just a Console.WriteLine.
It seems the SftpFileStream is problematic when we want to upload a file.
Can you please fix this problem? Thank you very much.
What version of SSH.NET are you using? What is the buffer size of the SftpClient? Why are you using a 10 MB buffer?
Note that it's perfectly normal that UploadFile is a little faster as it reads and writes in a tight loop. Also, UploadFile reads from the input stream in chunks of (roughly) SftpClient.BufferSize, and immediately writes each chunk.
What results are you getting?
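For context, here is a minimal sketch of the knob being asked about: setting SftpClient.BufferSize before calling UploadFile. The host, credentials and paths are placeholders, not taken from this thread.
using System.IO;
using Renci.SshNet;

class BufferSizeExample
{
    static void Main()
    {
        // Placeholder host, credentials and paths for illustration only.
        using (var client = new SftpClient("sftp.example.com", 22, "user", "password"))
        {
            client.Connect();

            // BufferSize controls the (approximate) chunk size UploadFile reads and writes.
            client.BufferSize = 32 * 1024;

            using (var source = File.OpenRead(@"C:\data\bigfile.bin"))
            {
                client.UploadFile(source, "/upload/bigfile.bin");
            }

            client.Disconnect();
        }
    }
}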
Hello,
I am using the latest version on NuGet: v2016.1.0-beta4.
Concerning the buffer size in the SftpClient, I don't change the size, so it's the default. I will check on Monday.
I am using a 10 MB buffer because I need to copy files of 15 GB or more. I read a very big block of data to reduce the number of "read" operations from the disk.
Today I had to copy a file. With UploadFile, to copy 10 MB it took approximately 10 seconds. With the other solution, it took more than 2 minutes.
Both UploadFile and SftpFileStream should be a lot faster than that (depending on your network). I'll also try to find time to do some comparisons.
I am having the same issue
A few weeks ago, UploadFile worked quickly (< 5 seconds) when transferring a 4 GB text file. Now it takes about 3-4 minutes to transfer. Any help?
What version(s) of SSH.NET did you test, and which one(s) were fast?
I had been using 2016.0.0 up until today, after experiencing issues. I then updated to the latest 2016.1.0 and am still experiencing the same issue.
While using 2016.0.0, the transfer was smooth. 9/30/2017 was the last time I saw a smooth transfer. A few days ago, I tested this and noticed it had gotten really slow.
Hi, how do I detect whether a file upload succeeded or failed?
Hi, I am also seeing a similar performance issue when using SftpFileStream with SSH.NET 2016.1.0. Uploading a 1 GB file through the SftpFileStream in 50 MB chunks takes around 30 minutes, and that time is specifically the writes to an already-opened SftpFileStream. Did anyone find a solution for it?
Hi, I am seeing very poor performance: 200 files totaling around 10 MB take 8+ minutes to download (upload performance is similar). I'm attaching the code used for download. Using the latest version, 2016.1.0. DownloadFiles.txt
Any update? I am facing the same issue as described by @jimmygilles.
I did a little looking into this as well... The UploadFile implementation is very different. I think it can optimize because it knows it is going to upload an entire file all at once, rather than possibly seeking around between writes. In UploadFile, all the write requests are sent with a callback, and the responses are processed asynchronously. When using OpenWrite, each write is sent with a non-null wait handle, causing it to wait synchronously for the response.
In my experimentation this caused at least an order of magnitude difference in performance.
namespace SftpTest
{
    using System;
    using System.Buffers;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using Renci.SshNet;

    static class StreamExtensions
    {
        // We pick a value that is the largest multiple of 4096 that is still smaller than the large object heap threshold (85K).
        // The CopyTo/CopyToAsync buffer is short-lived and is likely to be collected at Gen0, and it offers a significant
        // improvement in Copy performance.
        private const int DefaultCopyBufferSize = 81920;

        public static void CopyTo(
            this Stream source,
            Stream destination,
            Action<ulong> progress = null)
        {
            ulong totalBytes = 0;
            var buffer = ArrayPool<byte>.Shared.Rent(DefaultCopyBufferSize);
            try
            {
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) != 0)
                {
                    if (progress != null)
                    {
                        totalBytes += (ulong)read;
                        progress(totalBytes);
                    }

                    destination.Write(buffer, 0, read);
                }
            }
            finally
            {
                ArrayPool<byte>.Shared.Return(buffer);
            }
        }
    }

    class Program
    {
        private const string hostname = "test-server.example.com";
        private const string username = "me";
        private const int ProgressInterval = 1024 * 1024 * 10;

        private static SftpClient ConnectedClient()
        {
            var home = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.UserProfile);
            var authenticationMethods = new List<AuthenticationMethod>();
            authenticationMethods.Add(
                new PrivateKeyAuthenticationMethod(
                    username,
                    new PrivateKeyFile(Path.Join(home, ".ssh", "id_rsa"))));
            var client = new SftpClient(
                new ConnectionInfo(hostname, 22, username, authenticationMethods.ToArray()));
            client.Connect();
            return client;
        }

        static void ErrorExit(string format, params object[] args)
        {
            System.Console.Error.WriteLine(format, args);
            System.Environment.Exit(1);
        }

        static void UseCopyTo(string fromPath, string toPath)
        {
            try
            {
                using var client = ConnectedClient();
                using var input = File.OpenRead(fromPath);
                using var output = client.OpenWrite(toPath);
                var stopwatch = new Stopwatch();
                stopwatch.Start();
                ulong nextProgress = 0;
                input.CopyTo(
                    output,
                    (bytes) =>
                    {
                        if (bytes > nextProgress)
                        {
                            nextProgress += ProgressInterval;
                            Console.Error.WriteLine($"Wrote {bytes} bytes in {stopwatch.Elapsed}");
                        }
                    });
            }
            catch (Exception e)
            {
                ErrorExit("Failed: {0}\n{1}", e.Message, e.StackTrace.ToString());
            }
        }

        static void UseUploadFile(string fromPath, string toPath)
        {
            try
            {
                using var client = ConnectedClient();
                using var input = File.OpenRead(fromPath);
                var stopwatch = new Stopwatch();
                stopwatch.Start();
                ulong nextProgress = 0;
                client.UploadFile(
                    input,
                    toPath,
                    (bytes) =>
                    {
                        if (bytes > nextProgress)
                        {
                            nextProgress += ProgressInterval;
                            Console.Error.WriteLine($"Wrote {bytes} bytes in {stopwatch.Elapsed}");
                        }
                    });
            }
            catch (Exception e)
            {
                ErrorExit("Failed: {0}\n{1}", e.Message, e.StackTrace.ToString());
            }
        }
        static void Main(string[] args)
        {
            // Validate arguments before using them so the usage message is shown instead of crashing.
            if (args.Length < 2 || string.IsNullOrEmpty(args[0]) || string.IsNullOrEmpty(args[1]))
            {
                ErrorExit("requires `dotnet run --project ./SftpTest.csproj -- <fromPath> <to>`");
            }

            var fromPath = args[0];
            var toName = args[1];
            var toPath = $"/tmp/{toName}";
            System.Console.Out.WriteLine($"CopyTo writing {fromPath} to {toPath} on {hostname}");
            UseCopyTo(fromPath, toPath);
            System.Console.Out.WriteLine($"UploadFile writing {fromPath} to {toPath} on {hostname}");
            UseUploadFile(fromPath, toPath);
        }
    }
}
Initial attempts at using OpenRead seem to indicate the same performance problems, but switching to DownloadFile does not alleviate them...
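For anyone making the same comparison on the download side, here is a rough sketch of the two approaches being contrasted (OpenRead + CopyTo versus DownloadFile); the host, credentials and paths are placeholders, not taken from this thread.
using System.IO;
using Renci.SshNet;

class DownloadComparison
{
    static void Main()
    {
        // Placeholder host, credentials and paths for illustration only.
        using (var client = new SftpClient("sftp.example.com", 22, "user", "password"))
        {
            client.Connect();

            // Approach 1: OpenRead + CopyTo, which goes through SftpFileStream.
            using (var remote = client.OpenRead("/data/file.bin"))
            using (var local = File.Create(@"C:\temp\file-via-stream.bin"))
            {
                remote.CopyTo(local);
            }

            // Approach 2: DownloadFile, which issues its SFTP read requests internally.
            using (var local = File.Create(@"C:\temp\file-via-downloadfile.bin"))
            {
                client.DownloadFile("/data/file.bin", local);
            }
        }
    }
}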
I have a similar issue, and tried to work around it by writing my content into a MemoryStream instance and then copying its contents to the SftpFileStream.
using var stream = sftp.OpenWrite("destinationfile.txt");
using var buffer = new MemoryStream();
// writing my content to buffer
buffer.Position = 0; // rewind the buffer before copying
buffer.CopyTo(stream);
But this does not increase performance at all, so it's just massive overhead in SftpFileStream.
Also seeing terrible UploadFile performance in the latest NuGet package, 2020.0.1. Using MobaXterm I can upload 50x faster, and it only does one file at a time. I even tried multiple clients, one per file, with the same poor performance. There must be a global or singleton lock or bottleneck somewhere.
I've added two PRs today that improve SFTP performance (by a LOT, in my case), and simultaneously reduce CPU usage: #865 and #866. I'm not sure if the SCP file transfers also benefit, I didn't test/check that.
According to my benchmarks, both UploadFile and DownloadFile have massive speed gains and are now comparable to FileZilla.
These changes are for SftpClient.UploadFile/DownloadFile and variants. The Stream/CopyTo versions will also benefit a little on the CPU-usage side, but since they are synchronous and don't queue requests, I don't think they'll benefit that much performance-wise. However, if you are using (like I was) the Stream versions because they support Resume, you might want to check out #864.
I'd be very interested to know if these changes also help you guys :)
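For readers wondering what the Resume use case for the Stream versions looks like, here is a rough sketch, assuming "resume" means appending the missing tail of a partially uploaded file; the host, credentials and paths are placeholders, and error handling is omitted.
using System.IO;
using Renci.SshNet;

class ResumeUpload
{
    static void Main()
    {
        // Placeholder host, credentials and paths for illustration only.
        using (var client = new SftpClient("sftp.example.com", 22, "user", "password"))
        {
            client.Connect();

            const string remotePath = "/upload/large.bin";
            using (var source = File.OpenRead(@"C:\data\large.bin"))
            {
                // If a partial upload already exists, skip the bytes that are already on the server.
                long alreadyUploaded = client.Exists(remotePath)
                    ? client.GetAttributes(remotePath).Size
                    : 0;
                source.Seek(alreadyUploaded, SeekOrigin.Begin);

                // Append the remainder through the stream-based API.
                using (var destination = client.Open(remotePath, FileMode.Append, FileAccess.Write))
                {
                    source.CopyTo(destination);
                }
            }
        }
    }
}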
Any chance of a beta NuGet package?
With so many PRs pending, I think that'll take a while :-/ It looks like @drieseng is the only active maintainer of this repo and unfortunately he doesn't seem to have that much time. This needs more maintainers, or eventually it will have to be forked.
Not to diminish his work in any way - a huge THANK YOU to you, Gert.
@zybexXL Hi, sorry. I guess the performance issue is still happening, right? I see your PRs are still pending.
@nguyenlamlll I've just rebased #865 and #866 to allow them to be merged, but that's still up to the maintainers. The performance issue still exists, and this still fixes it.
Those have both been merged now and https://github.com/sshnet/SSH.NET/issues/100 was closed. Should this issue also be closed? Or are there additional upload performance actions being planned?
I confirm that the latest version, 2024.0.0, is still too slow (SftpFileStream.Write).
I tried to upload a >1 GB file in 10 MB chunks to an SFTP server in my region over a 1 Gbit connection; the average upload speed was 250 KB/s, while uploading the same file with the WinSCP app showed 3 MB/s (12 times faster!).
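If the chunked SftpFileStream.Write loop is only there for progress reporting, one workaround worth trying (the host, credentials and paths below are placeholders) is to let UploadFile drive the transfer and report progress through its callback, since UploadFile keeps multiple write requests in flight:
using System;
using System.IO;
using Renci.SshNet;

class UploadWithProgress
{
    static void Main()
    {
        // Placeholder host, credentials and paths for illustration only.
        using (var client = new SftpClient("sftp.example.com", 22, "user", "password"))
        {
            client.Connect();

            using (var source = File.OpenRead(@"C:\data\large.bin"))
            {
                long total = source.Length;

                // UploadFile pipelines its write requests; the callback reports bytes uploaded so far.
                client.UploadFile(source, "/upload/large.bin", uploaded =>
                {
                    Console.WriteLine($"Uploaded {uploaded} of {total} bytes");
                });
            }
        }
    }
}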